Finitely generated group with subexponential growth and surjection onto $\mathbb{Z}$ has finitely generated kernel.
Let $K=\ker \pi$ and, as in your post, let $G = \langle S, g \rangle$, with $S$ a finite subset of $K$. Then $K = \langle g^i S g^{-i} : i \in {\mathbb Z} \rangle$. For $N \ge 0$, let $K_N = \langle g^i S g^{-i} : 0 \le i \le N \rangle$, and for $N \le 0$ let $K_N = \langle g^{-i} S g^i : 0 \le i \le -N \rangle$. So we have two ascending chains of subgroups of $K$: $K_0 \le K_1 \le K_2 \le \cdots$ and $K_0 \le K_{-1} \le K_{-2} \le \cdots $. If both of these chains stabilize after finitely many steps, then $K$ is generated by a finite number of the $K_i$ and hence $K$ is finitely generated, and we are done. So we can assume that one of them does not stabilize, and by swapping $g$ and $g^{-1}$ if necessary, we can assume that $K_0 \le K_1 \le K_2 \le \cdots$ does not stabilize. Suppose that, for some $s \in S$ and $N_s > 0$, we have $$g^{N_s} s g^{-N_s} \in \langle s,gsg^{-1},\ldots,g^{N_s-1}sg^{-(N_s-1)} \rangle.$$ Then the same condition holds for all $N \ge N_s$. If such an $N_s$ existed for all $s \in S$, then the condition would hold for all $N \ge \max(N_s)$ and all $s \in S$. But then we would have $K_n \le K_{n-1}$ for all $n \ge \max(N_s)$, and the chain $K_0 \le K_1 \le K_2 \le \cdots$ would stabilize, contrary to assumption. So there exists $s \in S$ such that $g^n s g^{-n} \notin \langle s, gsg^{-1},\ldots, g^{n-1}sg^{-(n-1)} \rangle$ for all $n > 0$. We claim that the subsemigroup of $G$ generated by $gs$ and $g^2$ is free, which implies that $G$ has exponential growth. If not, then let $w_1$ and $w_2$ be distinct (positive) words in $g^2$ and $gs$ of smallest total length such that $w_1 =_G w_2$. Then one of $w_1,w_2$ - say $w_1$ - must end in $gs$ and the other in $g^2$, since otherwise we could cancel the final letters and get shorter equal words. Now, since $\pi(w_1) = \pi(w_2)$, they must both contain the same number of occurrences of $g^2$ and of $gs$ - suppose there is a total of $n$ occurrences of $g$ in both words. Then, when we rewrite $w_1$ and $w_2$ in $G$ to collect the powers of $g$ to the right (as you have done in your post), we get $w_1' g^{n} = w_2'g^n$, where $w_1'$ and $w_2'$ are words in conjugates $g^ksg^{-k}$ of $s$ for $k \ge 0$. Since $w_1$ ends in $gs$, and $w_2$ ends in $g^2$, the largest such $k$ occurring in $w_1'$ will be $k=n$, giving the factor $g^nsg^{-n}$ at the end of $w_1'$, but the largest $k$ occurring in $w_2'$ will be less than $n$. So we get $g^n s g^{-n} \in \langle s, gsg^{-1},\ldots, g^{n-1}sg^{-(n-1)} \rangle$, contrary to assumption.
Proving the continuation of the Cayley-Hamilton theorem from Schur's triangularization theorem
Take a column vector $v_k:=v$ and split it up into $k$ pieces corresponding to the blocks of the matrix. Since $(\mathbf{T} - \lambda_k \mathbf{I})^{a_k}$ has its lower right block equal to zero, $v_{k-1}:=(\mathbf{T} - \lambda_k \mathbf{I})^{a_k} v_k$ has last piece zero. Then, since $(\mathbf{T} - \lambda_{k-1} \mathbf{I})^{a_{k-1}}$ has a block in its $k-1$st row and $k-1$st column of blocks which is zero, $$ v_{k-2}:=(\mathbf{T} - \lambda_{k-1} \mathbf{I})^{a_{k-1}} v_{k-1}=(\mathbf{T} - \lambda_{k-1} \mathbf{I})^{a_{k-1}}(\mathbf{T} - \lambda_k \mathbf{I})^{a_k} v_k$$ has its $k-1$st and $k$th pieces equal to zero. You can continue in this way for a total of $k$ steps until you find that $$ v_0=\left(\prod_{1\le i\le k}(\mathbf{T} - \lambda_{i} \mathbf{I})^{a_{i}}\right) v=0. $$ Multiplying by $\mathbf{U}$ then gives $$ \left(\prod_{1\le i\le k}(\mathbf{A} - \lambda_{i} \mathbf{I})^{a_{i}}\right) \mathbf{U} v=0, $$ which, since $v$ was arbitrary and $\mathbf{U}$ is invertible, proves that $$ p(\mathbf{A})=\prod_{1\le i\le k}(\mathbf{A} - \lambda_{i} \mathbf{I})^{a_{i}}=\mathbf{0}. $$
What is the order of $(\mathbb{Z} \oplus \mathbb{Z})/ \langle (2,2) \rangle$ and is it cyclic?
You are right, $(\mathbb{Z} \oplus \mathbb{Z})/ \langle (2,2) \rangle$ is infinite. You can embed $\mathbb{Z}$ via $k\mapsto (k,0)$ (and in other ways) into it. The quotient is not cyclic, because it contains nonzero elements of finite order, the class of $(1,1)$ for example, which has order $2$; an infinite cyclic group is isomorphic to $\mathbb{Z}$, which is torsion-free. Probably it was meant to be $(\mathbb{Z} \oplus \mathbb{Z})/ (\langle 2\rangle\oplus \langle 2 \rangle)$, which indeed is a group of order $4$ (a Klein $4$-group).
If $G$ is nilpotent, then $N(G)>H$ for every $H<G$
It is not the case in general that $G_{k-1} = \{ g \in G : H^g = H \}$. The proof uses the following fact instead: If $[G_{k-1}, G] \leq H$, then $G_{k-1} \leq \{ g \in G : H^g = H \}$. Proving this fact takes about one or two lines and uses nothing but the definitions.
Find the repartition function of a discrete random variable
You may have an error in that I think the cumulative distribution function $F_X(x)=\frac35$ for $1 \le x \lt 2$ rather than your $\frac25$. So you could say $$F_{X}(x)=\begin{cases}0, & x<0\\ \frac15, & 0\le x<1\\ \frac15+\frac25, & 1\le x<2\\ \frac15+\frac25+\frac25, & x\ge 2\end{cases}$$ and you are accumulating all three probability values.
Group algebras, Maschke's lemma and direct sums of matrix algebras
You will need Schur's lemma to prove this. A proof reference can be found here, Lemma 9, Theorem 10.
Is there a trick to evaluating a matrix multiplied by the sum of three vectors?
There appear to be some typos. Yes the previous computations are relevant. As in the comment by Avitus, $Ap = (a^2 + 1)p , Aq=q, Ar = 0$. Then $$ \begin{align}x_n = \alpha_n p + \beta_n q + \gamma_n r &= A(\alpha_{n-1}p + \beta_{n-1}q + \gamma_{n-1}r ) + u\\ &= (\alpha_{n-1}(a^2 + 1) + c)p + (\beta_{n-1} + b)q - acr \end{align}$$ Then by comparing the coefficients of $p,q$ and $r$ you see that $\alpha_n = \alpha_{n-1}(a^2 + 1) + c$, $\beta_n = \beta_{n-1} + b$ and $\gamma_n = -ac$.
Integration inequality for a periodic function
Parseval gives $\int |f|^2 = 2 \pi \sum_k |f_k|^2 $, $\int |f''|^2 = 2 \pi \sum_k k^4|f_k|^2 $ (since if $f(t) = \sum_k f_k e^{i kt}$, then $f''(t) = -\sum_k k^2 f_k e^{i kt}$). Since $f_{-1}=f_0=f_1 = 0$ we have $\int |f|^2 = 2 \pi \sum_{|k|\ge 2} |f_k|^2 $ and, using $k^4 \ge 4$ for $|k|\ge 2$, $\int |f''|^2 = 2 \pi \sum_{|k|\ge 2} k^4|f_k|^2 \ge 2 \pi \cdot 4 \sum_{|k|\ge 2} |f_k|^2 = 4\int|f|^2$.
The norm map in group cohomology is an isomorphism if $M$ is a projective $G$-module
This follows immediately from the following two observations: $\overline{N}$ is an isomorphism when $M=\mathbb{Z}[G]$. $\overline{N}$ is an isomorphism for $M=\oplus_\alpha M_\alpha$ if and only if it is an isomorphism for each $M_\alpha$.
Random parking problem on a probability distribution
Here is one: Jean-François Marckert, Parking with density, Random Structures and Algorithms 18 (4), 364-380 (2001). A preprint version is available on this page.
What is $[T]^{\scr{C}}_{\scr{B}}$?
We are considering $m\times n$ matrices over $F$, so $T$ must be a linear transformation from $F^n$ to $F^m$. It's not clear from your question, but I will assume that $$\mathscr{B}=\{\beta_1,\ldots,\beta_n\}$$is a basis of $F^n$ and $$\mathscr{C}=\{\gamma_1,\ldots,\gamma_m\}$$is a basis of $F^m$. (If it's the other way around, just swap $\scr{B}$ and $\scr{C}$ in the rest of my answer.) Then the statement that an element $A\in\mathrm{M}_{m\times n}(F)$ is the matrix of $T$ in the bases $\scr{B}$ and $\scr{C}$ just means that $$A=\begin{bmatrix} a_{11} & \cdots & a_{1n}\\ \vdots & \ddots & \vdots\\ a_{m1} & \cdots & a_{mn} \end{bmatrix}$$ where the $a_{ij}$ are the elements of $F$ uniquely determined by $$T(\beta_j)=\sum_{i=1}^m a_{ij}\gamma_i,$$ so the $j$th column of $A$ records the coordinates of $T(\beta_j)$ in the basis $\scr{C}$. This uniquely determined element $A$ is denoted $[T]_{\scr{B}}^{\scr{C}}$.
Find the area on the surface of the parabolic cylinder $z=4-y^2$ on the first octant between the planes $y=x$ and $y=2x$ below $z=3$
$\renewcommand{\dd}[1]{\,\mathrm{d}#1}$In the first octant and below $z = 3$ means $0 < z < 3$. Plugging in the surface $z = 4 - y^2$ we have $$0 < 4 - y^2 < 3 \implies 1 < y < 2$$ Between $y = x$ and $y = 2x$ gives the interval $\frac{y}2 < x < y$. One can easily make a sketch and see that the region of integration is a trapezoid. Anyway, even without the visual aid, we have algebraically $$\int_{y = 1}^2\int_{x=y/2}^y \sqrt{1 + \Bigl(\frac{\partial f}{\partial x}\Bigr)^2 + \Bigl(\frac{\partial f}{\partial y}\Bigr)^2}\dd{x} \dd{y}$$ with $f(x,y) = z(x,y) = 4-y^2$. That's the setting-up. Can you take it from here?
How to find the equation of a circle that shares a tangent with another circle of known centre and radius?
Here is a "mathematical" way in which we do not need to draw a diagram: $\left.\right.$ Suppose the equation of the small circle is $$(x-a)^2+(y-b)^2=(2\sqrt2)^2$$ Then differentiate both sides w.r.t $\,x$, and we have $$2(x-a)+2(y-b)\frac{dy}{dx}=0$$ Since the tangent point is $(-1,4)$ and the the slope of the tangent line is $-1$, we have $$2(-1-a)+2(4-b)(-1)=0$$ $$\Rightarrow\ \ b=5+a$$ Back to the equation of the small circle, $$(-1-a)^2+(4-(a+5))^2=8$$ $$\Rightarrow\quad 2(a+1)^2=8\quad$$ $$\Rightarrow\quad a=-3,1\ \ \ \&amp;\ \ \ b=2,6$$ Thus, the equation of the small circle is: $$(x+3)^2+(y-2)^2=8$$ $$\text{or}\quad\ (x-1)^2+(y-6)^2=8\qquad$$
Conditional Expectation
This is to establish the following for Borel set $B\subseteq [0,1]$, $$\begin{align} \mathsf P(Y\in B) & = \iint_{Y^{-1}(B)} f_{X,Y}(x,y) \mathrm dx\mathrm dy \\ & = \iint_{[0,1]\times B} f_{X,Y}(x,y) \mathrm dx\mathrm dy \\ & = \int_B \int_0^1 (x+y) \mathrm dx\mathrm dy \\ & =\int_B (y + \frac 1 2 ) \mathrm dy. \end{align}$$ Thus, the marginal PDF $f_Y(y)$ is $y+\frac 1 2$ for $y\in [0,1]$, and $0$ otherwise. In fact, the joint density of $X$ and $Y$ tells us how $X: \Omega\rightarrow \mathbb{R}$ and $Y:\Omega\rightarrow \mathbb{R}$ are distributed with $X(x,y)=x$ and $Y(x,y)=y$. We have for any Borel set $A\subseteq [0,1]\times [0,1]$, $$P((X,Y)\in A) = \iint_A f_{X,Y}(x,y) \mathrm dx\mathrm dy.$$ Thus, $$\begin{align} (x,y)\in Y^{-1}(B) & \iff Y(x,y) \in B \\ & \iff x\in [0,1] \wedge y\in B \\ & \iff (x,y)\in [0,1]\times B \end{align}$$
$(a_n)_{n \geq 1}=\mathbb{Q}_+$ and $\sqrt[n]{a_n}$ is convergent
Actually the standard one $$ (a_n) = \left( \frac 11, \frac 21, \frac 12, \frac 31, \frac 13, \frac 41, \frac 32, \frac 23, \frac 14, \cdots\right) $$ works. The observation is that for the members $a_n$ in the $i$-th layer: $$ \frac i1, \frac{i-1}{2}, \cdots, \frac{2}{i-1}, \frac 1i,$$ we have $$ i \ge a_n \ge i^{-1}\Rightarrow \sqrt[n]{i} \ge \sqrt[n]{a_n} \ge (\sqrt[n]{i})^{-1}.$$ But clearly $n\ge i$, so $$ \sqrt[n]{n} \ge \sqrt[n]{a_n} \ge (\sqrt[n]{n})^{-1}.$$ So $\sqrt[n]{a_n} \to 1$ as $\sqrt[n]{n} \to 1$.
Definition for convexity for a function defined by cartisian product
$$g(s x_1 + (1-s) x_2, s y_1 + (1-s) y_2) \le s g(x_1, y_1) + (1-s) g(x_2, y_2) \ \text{for}\ s \in [0,1]$$ Sometimes this is called "jointly convex" to distinguish this from "separately convex" (i.e. convex in $x$ for fixed $y$ and convex in $y$ for fixed $x$).
Simplify these indices?
Hint: Now divide $6$ by $2$ and use the laws of exponents on the terms with $a$.
Evaluating a limit $\lim_{x\to 0}\left\{\dfrac 2{x^3}(\tan x- \sin x )\right\}^{2/x^2 }$
Since $\sin x\approx x-\frac{x^3}{6}+\frac{x^5}{120}$ and $\tan x\approx x+\frac{x^3}{3}+\frac{2x^5}{15}$, $\frac{2}{x^3}(\tan x-\sin x)\approx \frac{2}{x^3}(\frac{x^3}{3}+\frac{2x^5}{15}+\frac{x^3}{6}-\frac{x^5}{120})=1+\frac{x^2}{4}\approx\exp\frac{x^2}{4}$. The limit is therefore $\sqrt{e}$.
Find kernel of a linear transformation, if it is possible.
I guess the problem here is to check whether $F$ is well-defined in the first place. Normally, if a linear function (in particular a linear operator) is defined on a basis, it uniquely gets extended to the whole domain. In our case, it is defined on the vectors $(3,2,2), (1,2,1), (2,3,2)$, so all it takes is to check if those vectors form a basis. Depending on what you know about vector spaces, there are different ways of doing that: one possible way is to turn those vectors into columns and check if the matrix that you obtain, i.e. $\begin{bmatrix}3&1&2\\2&2&3\\2&1&2\end{bmatrix}$, is non-singular (i.e. has a nonzero determinant). Indeed, its determinant is $1$, which proves that $(3,2,2), (1,2,1), (2,3,2)$ is a basis. Now that we know $F$ is a well-defined linear operator, you can proceed to find its kernel.
Finding covariance of $f_{(x,y)} (x,y) = \frac{1}{4}(y-x)e^{-y}$ for $-y<x<y$, $y>0$
Your formula for the marginal distribution should be $f_X(x) = \frac{1}{4}e^{-|x|}(|x|-x+1)$, but we don't need it. We can just use a double integral to compute all the moments from the joint distribution: $$ \begin{aligned} E[X^m Y^n] &= \int_{-\infty}^\infty \int_{-\infty}^\infty \frac{e^{-y}}{4}x^m y^n (y-x) \left[|x| < y\right] \ dx \ dy \\ &= \int_{0}^\infty \int_{-y}^y \frac{e^{-y}}{4}x^m y^n (y-x)\ dx \ dy \\ &= \int_{0}^\infty \frac{y^n e^{-y}}{4} \int_{-y}^y (yx^m - x^{m+1}) dx \ dy \\ &= \int_{0}^\infty \frac{y^n e^{-y}}{4} \left. \left(\frac{yx^{m+1}}{m+1}-\frac{x^{m+2}}{m+2}\right) \right|_{-y}^y \ dy \\ &= \left(\frac{1-(-1)^{m+1}}{m+1}-\frac{1-(-1)^{m+2}}{m+2}\right) \int_{0}^\infty \frac{y^{n+m+2} e^{-y}}{4} \ dy \\ &= \begin{cases} \frac{(m+n+2)!}{2(m+1)}& m \text{ even}\\ -\frac{(m+n+2)!}{2(m+2)} & m \text{ odd}\end{cases} \end{aligned} $$ where we use $\int_{0}^\infty y^{n} e^{-y} \ dy =\Gamma(n+1) = n!$. Hence $$ \begin{aligned} E[X] &= -\frac{3!}{2(3)} = -1 \\ E[Y] &= \frac{3!}{2(1)} = 3\\ E[XY] &= -\frac{4!}{2(3)} = -4 \\ \mathrm{Cov}(X,Y) &= -4 - (-1)(3)=-1 \end{aligned} $$
Geometric brownian motion with more than one brownian motion term
The respective solution would become $$ X_t=X_0\exp\left[\left(\alpha-\frac{1}{2}\sigma_1^2-\frac{1}{2}\sigma_2^2\right)t+\sigma_1W_t^1+\sigma_2W_t^2\right]. $$ Note that the SDE reads $$ {\rm d}X_t=X_t\left(\alpha{\rm d}t+\sigma_1{\rm d}W_t^1+\sigma_2{\rm d}W_t^2\right). $$ This gives the quadratic variation of $X_t$ by, intuitively, \begin{align} {\rm d}\left\langle X\right\rangle_t&={\rm d}X_t{\rm d}X_t\\ &=X_t^2\left(\alpha{\rm d}t+\sigma_1{\rm d}W_t^1+\sigma_2{\rm d}W_t^2\right)^2\\ &=X_t^2\left(\alpha^2{\rm d}t^2+\sigma_1^2{\rm d}W_t^1{\rm d}W_t^1+\sigma_2^2{\rm d}W_t^2{\rm d}W_t^2+\right.\\ &\quad\quad\left.2\alpha\sigma_1{\rm d}t{\rm d}W_t^1+2\alpha\sigma_2{\rm d}t{\rm d}W_t^2+2\sigma_1\sigma_2{\rm d}W_t^1{\rm d}W_t^2\right)\\ &=X_t^2\left(\sigma_1^2{\rm d}t+\sigma_2^2{\rm d}t\right), \end{align} where the ${\rm d}W_t^1{\rm d}W_t^2$ term drops due to the independence of $W_t^1$ and $W_t^2$. Thus thanks to Ito's lemma, \begin{align} {\rm d}\log X_t&=\frac{{\rm d}X_t}{X_t}-\frac{1}{2}\frac{{\rm d}\left\langle X\right\rangle_t}{X_t^2}\\ &=\left(\alpha{\rm d}t+\sigma_1{\rm d}W_t^1+\sigma_2{\rm d}W_t^2\right)-\frac{1}{2}\left(\sigma_1^2+\sigma_2^2\right){\rm d}t\\ &=\left(\alpha-\frac{1}{2}\sigma_1^2-\frac{1}{2}\sigma_2^2\right){\rm d}t+\sigma_1{\rm d}W_t^1+\sigma_2{\rm d}W_t^2\\ &={\rm d}\left[\left(\alpha-\frac{1}{2}\sigma_1^2-\frac{1}{2}\sigma_2^2\right)t+\sigma_1W_t^1+\sigma_2W_t^2\right]. \end{align} This, together with the initial condition, eventually yields $$ X_t=X_0\exp\left[\left(\alpha-\frac{1}{2}\sigma_1^2-\frac{1}{2}\sigma_2^2\right)t+\sigma_1W_t^1+\sigma_2W_t^2\right]. $$
Determining distribution and therefrom probability
The error is that you have assumed that $\frac{V_1}{V_1+V_2}$ is distributed as $F(5,14)$. This is not the case, as $V_1$ and $V_1+V_2$ are not independent (have a look at the Characterisation section in here) - on the other hand, $\frac{V_1/5}{V_2/9}$ would be distributed as $F(5,9)$. Going back to the problem at hand, we can tackle it as follows:- $$\begin{align}P\left[\frac{V_1}{V_1+V_2}<b\right]&=P\left[1+\frac{V_2}{V_1}>\frac{1}{b}\right]\\&=P\left[\frac{V_2}{V_1}>\frac{1}{b}-1\right] \\&=P\left[\frac{V_2/9}{V_1/5}>\frac{5}{9}\left(\frac{1}{b}-1\right)\right]\\&=1-P\left[\frac{V_2/9}{V_1/5}\leq\frac{5}{9}\left(\frac{1}{b}-1\right)\right]=0.9\end{align}$$ So we want to find the quantile of $F(9,5)$ for $0.1$ - from F-tables, it turns out to be about $0.3831$, so that $b$ is $$b=\frac{1}{1+(0.3831\times(9/5))}=0.592$$
Non linear first order differential equation
Starting with: $${\frac {d^{2}}{d{t}^{2}}}x \left( t \right) = x \left( t \right) ^{3}-x \left( t \right) \tag{1}$$ substitute: $$x \left( t \right) ={\frac {\sqrt{2}\,k\,y \left( t \right)}{ \sqrt{(1+{k}^{ 2})}}}$$ into $(1)$ and then rescale the variable to: $$ t=T\sqrt{1+k^2}$$ and you get the equation for the Jacobi elliptic $\operatorname{sn}$ function: $${\frac {d^{2}}{d{T}^{2}}}y \left( T \right) -2\, y \left( T \right) ^{3}{k}^{2}+ \left( 1+{k}^{2} \right) y \left( T \right)=0$$ which has solution: $$y(T)=\operatorname{sn}(T+\tau_0,k)$$ and you are free to pick the value for the elliptic modulus $k$ and starting value $\tau_0$ as these are your two free parameters for the second order differential equation. Back substitution then gives: $$x \left( t \right) ={\frac {\sqrt {2}k}{\sqrt {1+{k}^{2}}}\operatorname{sn}\left( {\frac {t}{\sqrt { 1+{k}^{2}}}}+{\it \tau_0},k \right) }$$ You will recover an elementary function if you choose $k=1$ and you get: $$x \left( t \right) =\tanh \left( \frac{t}{\sqrt {2}}+\tau_0\right) $$ as you are aware.
The "set of all possible worlds", etc.
In modal logic we use the term "possible worlds" to describe some set of "vertices" with an accessibility relation defining "edges". Possible worlds are just a term for some set $W$ which we wish to identify as our frame in the context of Kripke semantics. When we define a valuation on that frame we obtain a model which has certain modal formulas being satisfied depending on the structure of the vertices and edges (in the graph theory sense). Formally, $\mathcal{F} = \langle W,R \rangle$ is a frame, where $R \subseteq W \times W$, and $\mathcal{M} = \langle W,R, \text{Val}\rangle$ is a model where $\text{Val}: \text{Var} \times W \rightarrow \{0,1\}$ is a valuation function which sends propositions in the set $\text{Var}$ at a world $w \in W$ to a truth value (we can also define probabilities that a modal formula is satisfied at a world by considering a valuation function with values which map to the interval $[0,1]$). Depending on the structure of the accessibility relation $R$, we can have different modal axioms satisfied in the model. For example, consider the modal axiom $B = p \rightarrow \Box \Diamond p$. If we have that $\mathcal{M}_{w} \vDash B$, $\forall w \in W$, which is read as "the model $\mathcal{M}$ makes true $B$ at all possible worlds", we say that $\mathcal{F} \vDash B$, which is that $B$ is satisfied in the frame $\mathcal{F}$. In this case, the satisfaction of $B = p \rightarrow \Box \Diamond p$ in all possible worlds ensures that the accessibility relation $R$ is symmetric, that is, $w R w' \Rightarrow w' R w, \forall w,w' \in W$. We can characterize modal frames by the satisfaction of modal axioms in this way and give an interpretation of the philosophical phrases such as "it is possible that $\varphi$" and "it is necessary that $\psi$." In summary, $W$ is just the vertex set of a graph and modal logic studies the satisfactions of modal formulas and other properties of frames and models. I should probably mention that the following are the formal definitions of possibility and necessity. $\mathcal{M}_{w} \vDash \Diamond \varphi \Leftrightarrow \exists (w,w') \in R \; | \; \mathcal{M}_{w'} \vDash \varphi$ $\mathcal{M}_{w} \vDash \Box \varphi \Leftrightarrow \forall (w,w') \in R \; | \; \mathcal{M}_{w'} \vDash \varphi$ We can define other modalities in a similar manner, thus generalizing to temporal logic, epistemic logic, and other interesting types of logic. Here is a pretty picture I made which gives an example of a model with an orientation (a directed graph upwards as this is representing temporal logic).
Probability that 3 vertices of a 2n+1 sided polygon chosen at random form vertices of an isosceles triangle
Choose the first vertex at random (it is indistinguishable from the other vertices). Next, we drop an axis of symmetry at the point. For each of the remaining $2n$ points, it suffices to pick a single vertex from one side of the axis of symmetry (the other vertex will be determined by reflecting across the axis of symmetry). The number of ways to choose 2 points at random from the remaining points is $\binom{2n}{2},$ which gives an overall probability of $$\boxed{\frac{n}{\binom{2n}{2}}}.$$
Proof of a corollary for the Strong Law of Large Number
Expanding a bit on my comment: replace $X_i$ by $X_i-\mathbb{E}X_i$. Let $X$ be distributed as the $(X_i)_{i\geq 1}$ and $Y_i=c_iX_i\mathbf{1}_{\lvert c_iX_i\rvert\leq i}$. By dominated convergence, $\lim\mathbb{E}Y_i=0$. Denoting $M=\sup_{i\geq 1}\lvert c_i\rvert$, $$\sum_{i=1}^\infty\mathbb{P}(c_iX_i\neq Y_i)=\sum_{i=1}^\infty\mathbb{P}(\lvert c_iX\rvert>i)\leq\sum_{i=1}^\infty\mathbb{P}(M\lvert X\rvert>i)\leq\int_0^\infty\mathbb{P}(M\lvert X\rvert>t)dt=\mathbb{E}M\lvert X\rvert<\infty$$ So the Borel-Cantelli lemma applies and a.s., there exists $J\geq 1$ such that for all $i\geq J$ the equality $c_iX_i=Y_i$ holds. Finally, $$\sum_{i=1}^\infty\frac{\operatorname{Var}(Y_i)}{i^2}\leq\sum_{i=1}^\infty\frac{\mathbb{E}Y_i^2}{i^2}=\mathbb{E}\left[\lvert X\rvert^2\sum_{i=1}^\infty\frac{c_i^2\,\mathbf{1}_{\lvert c_iX\rvert\leq i}}{i^2}\right]\leq c\mathbb{E}\lvert X\rvert<\infty$$ for some constant $c\geq 0$. We can now apply the following lemma with $Z_n=Y_n-\mathbb{E}Y_n$. Let $(Z_n)_{n\geq 1}$ be a sequence of independent r.v. with $\mathbb{E}Z_n=0$ and $\sum_{n\geq 1}\operatorname{Var}(Z_n)/n^2<\infty$. Denoting $S_n=Z_1+...+Z_n$, the sequence $(S_n/n)_{n\geq 1}$ converges a.s. to $0$. Since $(\mathbb{E}Y_i)_{i\geq 1}$ converges to $0$, $$\frac{Y_1+...+Y_n}{n}=\frac{Z_1+...+Z_n}{n}+\frac{\mathbb{E}Y_1+...+\mathbb{E}Y_n}{n}$$ converges to $0$ as well a.s. Finally, a.s., for all $n\geq J$, $$\frac{c_1X_1+...+c_nX_n}{n}=\frac{c_1X_1+...+c_{J-1}X_{J-1}}{n}+\frac{Y_J+...+Y_n}{n}\longrightarrow 0$$
Please help my incorrect thinking about $(ijk)^2$ regarding quarternions
$(ijk)^2\neq i^2j^2k^2$ because $i,j$, and $k$ do not commute. Indeed, $(ijk)^2=(-1)^2=1$, while $i^2j^2k^2=(-1)(-1)(-1)=-1$.
Proving Cographic matroid is indeed a matroid
For convenience, let $n = |V|$. Using $n-1$ edges to connect the $n$ nodes, you get a tree (connected + no cycles). Because $A \in I$, that means that $(V, E \setminus A)$ is a connected graph. Therefore $|E \setminus A| \ge n-1$ (otherwise the graph $(V, E \setminus A)$ would not be connected). $$|B| < |A| \Rightarrow |E \setminus B| > | E \setminus A |$$ Because of that, $|E \setminus B| \ge n$, which means that there is at least one cycle in the graph $(V, E \setminus B)$. Since both $(V, E \setminus A)$ and $(V, E \setminus B)$ are connected graphs, there has to be one edge on a cycle in $(V, E \setminus B)$ which is not used in the graph $(V, E \setminus A)$: $$\exists e \in (E \setminus B): e \notin (E \setminus A) \\ \Leftrightarrow \exists e \in E: (e \notin B) \wedge (e \in A) \\ \Leftrightarrow \exists e \in (A \setminus B) $$ By putting this edge $e$ into $B$, we remove an edge on a cycle in $(V, E \setminus B)$ without disconnecting the graph. Hence $(B \cup \left\lbrace e \right\rbrace) \in I$. $\square$
How to determine if a pixel is inside a contour?
Assuming that the contour doesn’t have any loops, shoot a ray in some convenient direction from the pixel to the edge of the image and count the number of times that it crosses the contour: an odd number of crossings means that it’s inside, an even number, outside. You’ll need to be a bit careful about how you count when the ray coincides with the contour for a while—there might not be an actual crossing in that case.
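For concreteness, here is a minimal sketch of this even-odd test in Python on a binary contour mask (the function name, the NumPy mask representation, and the rightward ray direction are my own choices, not from the answer). Runs of consecutive contour pixels along the ray are collapsed into single crossings, which handles the simplest of the coincidence cases; genuinely tangential grazes would still need the more careful counting the answer warns about.

```python
import numpy as np

def inside_contour(mask, row, col):
    """Even-odd rule: cast a ray from (row, col) to the right image edge and
    count entries onto the contour; an odd count means the pixel is inside.
    `mask` is a 2-D boolean array that is True exactly on contour pixels."""
    ray = mask[row, col + 1:]
    if ray.size == 0:
        return False  # query pixel sits on the image border
    # Count rising edges (off-contour -> on-contour), so each run counts once.
    crossings = int(ray[0]) + int(np.count_nonzero(ray[1:] & ~ray[:-1]))
    return crossings % 2 == 1
```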
$M$ is row strict diagonally dominant matrix and Hurwitz. What about $M+M^T$?
This is NOT true. I will give you a counterexample: $M=\begin{bmatrix}-1.01 & 1 \\2& -2.01 \end{bmatrix}, P=M+M^T=\begin{bmatrix}-2.02 & 3 \\3 & -4.02 \end{bmatrix}$. Using the matlab command eig I found out that the eigenvalues of $M$ are $-0.01$ and $-3.01$ while those of $P$ are $-6.1823$ and $0.1423$. It is noted that $M$ is strictly diagonally dominant and Hurwitz as required in the question. However, the second eigenvalue of $P$ is positive and therefore proves that your proposition is false. Actually, there are many counterexamples to this question. BTW: The definitions of diagonally dominant and strictly diagonally dominant are as follows [*]: A matrix $A=[a_{ij}]\in M_n$ is diagonally dominant if \begin{equation} |a_{ii}| \ge \sum_{j \ne i}|a_{ij}|=R_i^{'} \quad \text{for all } i=1,\ldots,n. \end{equation} It is strictly diagonally dominant if \begin{equation} |a_{ii}| > \sum_{j \ne i}|a_{ij}|=R_i^{'} \quad \text{for all } i=1,\ldots,n. \end{equation} Based on this, your question is not accurate enough. You should change the condition to strictly diagonally dominant instead. [*]: Horn, Roger A., and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 2012.
Ring Theory (Local Rings)
HINT for $\Leftarrow$: Do the contrapositive; if $\mathfrak{m}_1$ and $\mathfrak{m}_2$ are two distinct maximal ideals of $R$, then look at the left $R$-module $R/(\mathfrak{m}_1\cap\mathfrak{m}_2)$; it is principal (generated by $1+(\mathfrak{m}_1\cap\mathfrak{m}_2)$). Is it indecomposable? HINT for $\Rightarrow$: Suppose $M\neq\{0\}$ is principal, and $M=N_1\times N_2$. Let $m=(n_1,n_2)$ generate $M$. Then there are $a,b\in R$ such that $am = (n_1,0)$ and $bm=(0,n_2)$; so $(a+b)m = m$. Then $a+b-1\in\mathrm{Ann}(m)$. Can $a$ and $b$ both lie in the maximal ideal of $R$? What does that mean?
$f, g: \mathbb{R} \to \mathbb{R}$ and $f(x+h) = f(x) + g(x)h + a(x,h)$ for $|a(x,h)| \leq Ch^3$. Show that $f$ is affine.
Enough to show $\forall r,f(r)=f(0)+rg(0)$. First, let me simplify the question a bit. If there exist $f,g$ satisfying the condition and $f$ is nonaffine with $f(0)=a$ and $f(1)=a+b$, we may consider $f^*(x)=\frac{f(x)-a}{b},g^*(x)=\frac{g(x)}{b}$, where $f^*$ is not affine and $(f^*,g^*)$ also satisfies the condition. Furthermore, if there exist such $f,g$ s.t. $f(r)\neq 0$, we may consider $f^*:x\mapsto f(rx)$ and $g^*:x\mapsto g(rx)$ as a counterexample to $f(1)=0$. Thus, it suffices to show $f(1)=0$. Consider a positive integer $N$. $|f(\frac{1}{N})-f(0)|=|f(\frac{1}{N})|=|a(0,\frac{1}{N})|\le \frac{C}{N^3}$ $\forall l,\forall k,f(\frac{k+l}{N})=f(\frac{l}{N})+\frac{k}{N}g(\frac{l}{N})+a(\frac{l}{N},\frac{k}{N})\Rightarrow |f(\frac{l+1}{N})+f(\frac{l-1}{N})-2f(\frac{l}{N})|\le \frac{2C}{N^3}$ $\Rightarrow\forall l\in\mathbb{Z}_{\ge 0},|f(\frac{l+1}{N})-f(\frac{l}{N})|\le C\cdot\frac{2l+1}{N^3}$ $\Rightarrow\forall l\in\mathbb{Z}_{\ge 0},|f(\frac{l}{N})|\le C\cdot\frac{l^2}{N^3}$ $\Rightarrow |f(1)|\le\frac{C}{N}$ Since this works for every $N$, $f(1)=0$ and so we are done.
Hints with this Integration Problem
This may be done by recognizing that $$\frac{x \cos{x}-\sin{x}}{x^2} = \frac{d}{dx} \frac{\sin{x}}{x} = \frac{d}{dx} \int_0^1 du \, \cos{x u} = -\int_0^1 du \, u \sin{x u}$$ Thus the integral is then equal to $$-\int_0^{\infty} dx \frac{\cos{(x/2)}}{x} \, \int_0^1 du \, u \sin{x u}$$ We can reverse the order of integration to get $$-\int_0^1 du \, u \, \int_0^{\infty} dx \frac{\sin{x u}}{x} \cos{(x/2)}$$ which looks like a Fourier transform: $$\int_0^{\infty} dx \frac{\sin{x u}}{x} \cos{(x/2)} = \frac12 \int_{-\infty}^{\infty} dx \, \frac{\sin{x u}}{x} e^{i k x}$$ where $k = \frac1{2}$. The right-hand integral is simply $\pi$ when $|u|>1/2$ and zero elsewhere, so the inner integral equals $\frac{\pi}{2}$ for $|u|>1/2$. Thus the integral is equal to $$-\frac{\pi}{2} \int_{1/2}^1 du \, u = -\frac{3 \pi}{16}$$
Show that $f(x,y)=\frac{1}{y}$ is differentiable
$$\frac{|y-y_0|\left|\frac{1}{y_0^2}-\frac{1}{y_0y}\right|}{\sqrt{(x-x_0)^2+(y-y_0)^2}}\leq \frac{|y-y_0|\left|\frac{1}{y_0^2}-\frac{1}{y_0y}\right|}{|y-y_0|}=\left|\frac{1}{y_0^2}-\frac{1}{y_0y}\right|\rightarrow 0$$ as $(x,y)\rightarrow (x_0,y_0),$ provided $y_0\neq 0.$ Now, just use the squeeze theorem.
Can we see the beta coefficients in OLS as mean values?
No, since the coefficient is the slope of the line which minimizes the sum of squared residuals. In that sense it cannot be interpreted as the mean; you should rather interpret it as the marginal impact of $X$ on $Y$.
What is $\int_0^0 \frac{\sin t}{t}dt$ and why?
There are two ways to “solve” the issue. Method 1. We’re integrating the function $$ \operatorname{sinc}x=\begin{cases} \dfrac{\sin x}{x} &amp; x\ne0 \\[6px] 1 &amp; x=0 \end{cases} $$ but using a sloppy notation in the integral. Method 2. We’re defining $$\operatorname{Si}x=\int_0^x \frac{\sin t}{t}\,dt$$ for $x\ne0$ and extending the function by continuity with $\operatorname{Si}0=0$. Both methods yield the same function, of course. In either case, something is left to the reader to fill in.
Does $\pi$ consist of $\pi$ in it?
Note that if we construct a number in the way you've mentioned, we obtain a repeating decimal, and numbers that can be represented as a repeating decimal are rational, while $\pi$ is not. $$x10^n = 10 \lfloor10^{n-1}x\rfloor+x$$ $$x= \frac{10 \lfloor10^{n-1}x\rfloor}{10^n-1}\in\mathbb{Q}$$ Thus we can say that for every irrational number $x$ (e.g. $\pi$) there is no $n\in\mathbb{N}$ such that $x$ satisfies the first equation.
Showing that localization is an exact functor
It is enough to prove it preserves short exact sequences: $\;0\to M\to N\to P\to 0$. As the tensor product is right-exact, and $S^{-1}M\simeq M\otimes_A S^{-1}A$, it is even enough to prove it preserves injectivity. So consider an injective morphism $\varphi\colon M\to N$ and suppose $\;(S^{-1}\varphi)\Bigl(\dfrac ms\Bigr)=0$ in $S^{-1}N$. This means there exists $t\in S$ such that $\;t\mkern1mu\varphi(m)=\varphi(tm)=0$, hence $tm=0$ by the injectivity of $\varphi$. But then $$\frac ms=\frac{tm}{ts}=\frac0{ts}=0,$$ which shows $\;S^{-1}\varphi\;$ is injective.
Complement of a bounded set $B$ in $\mathbb{R}^{n}$ has exactly one unbounded component.
If $B$ is bounded then there is a ball $S$ of some radius $r$ with $B\subseteq S$. Therefore, $\neg S\subseteq \neg B$, and since $\neg S$ is connected (for $n\geq 2$), $\neg S$ lies within one component of $\neg B$. All other components of $\neg B$ (if any exist) must be inside $S$ and hence bounded.
A formula for the smallest k such that n^k > n!
You may solve $$ n^x>n! $$ giving $$ x \ln n> \ln (n!) $$ and, for $n=2,3,\ldots$, $$ x>\frac{\ln (n!)}{\ln n}. $$ Since the inequality is strict, the smallest integer solution is $$ k:=k_n=\left\lfloor\frac{\ln (n!)}{\ln n}\right\rfloor+1 $$ (a plain ceiling would fail when the ratio is an integer, e.g. $n=2$, where the ratio is $1$ but $2^1=2!$ is not greater than $2!$). As $n \to \infty$, this may be approximated by using Stirling's formula.
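As an illustration of the formula (my own numerical example): for $n=5$, $$\frac{\ln(5!)}{\ln 5}=\frac{\ln 120}{\ln 5}\approx\frac{4.787}{1.609}\approx 2.975,\qquad k_5=\lfloor 2.975\rfloor+1=3,$$ and indeed $5^3=125>120=5!$ while $5^2=25<120$.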
Mean curvature flow - implementation fails for some meshes
Finally I wrote a bit about mean curvature and the presented problem. You can find everything on my page here. If something is written badly I will be grateful for any feedback. Happy math!
Inverse linear transformation from $P_1(\Bbb R)→P_1(\Bbb R)$ question
Hint: Write $T^{-1}(p)=cx+d$ for every $p\in P_1 (\Bbb{R})$ and try to calculate $T^{-1}\circ T$.
Equation of chord of a parabola whose midpoint is given
You made a typo here. It should be $$ yy_1-2a(x+x_1)=y_1^2-4ax_1 $$ without the $=0$ at the end. The point $(x_1,y_1)$ does not lie on the parabola $y^2=4ax$. Instead, it lies on a shifted parabola $y^2-4ax=y_1^2-4ax_1$ in order to be the midpoint of a chord. The focus of this shifted parabola is $(a+x_1-y_1^2/(4a),0)$. Let's get back to proving the equation of the chord. Suppose $(as^2,2as)$ and $(at^2, 2at)$ are two points on the parabola $y^2=4ax$. The mid-point of the chord is $$ \left(a\frac{s^2+t^2}{2},a(s+t)\right)=:(x_1,y_1) $$ and the equation of the chord is $$ \frac{y-y_1}{x-x_1}=\frac{2a(t-s)}{a(t^2-s^2)}=\frac{2a}{a(t+s)}=\frac{2a}{y_1}. $$ So $$ yy_1-y_1^2=2a(x-x_1), $$ or equivalently $$ yy_1-2a(x+x_1)=y_1^2-4ax_1, $$ as claimed.
Conditional distribution from joint PMF
It should be pmf (probability mass function). We have $\Pr(X=0\mid Y=0)=\frac{0.5}{0.5+0.2}$ and $\Pr(X=1\mid Y=0)=\frac{0.2}{0.5+0.2}$. You had the ratio right, but the conditional probabilities have to have sum $1$.
f is an absolutely continuous function on $[a,b]$; prove that $\int_a^b\vert f'(t) \vert dt=V_{a}^b $
Maybe I got everything wrong, but you've said that you can prove that $$\int_a^b\vert f'(t) \vert dt \le V_{a}^b(f)$$ If it is so, the opposite inequality is quite simple: let $x_0=a<x_1<x_2<...<x_n=b$ be a partition of the segment $[a,b]$. Then $$\sum_{k=0}^{n-1} |f(x_{k+1})-f(x_{k})| = \sum_{k=0}^{n-1}\left|\int_{x_k}^{x_{k+1}} f'(t)dt\right| \le \sum_{k=0}^{n-1}\int_{x_k}^{x_{k+1}} |f'(t)|dt = \int_a^b |f'(t)|dt$$ And taking the supremum over all partitions gives you $$ V_{a}^b(f) \le \int_a^b\vert f'(t) \vert dt $$
Banach space with subset whose elements are at least $d\gt 0$ far from each other is not separable
$X$ is separable iff it contains a dense countable subset. If $A$ is an additive subgroup with points separated by more than $d$, then the open sets $(U_a)_{a \in A}$ given by $$ U_a = B_a(d/3) $$ form an uncountable family of disjoint open sets (assuming $A$ is uncountable). Any dense set must intersect each of them, but by disjointness this would imply that the dense set has uncountable cardinality.
Show that $\lim_{n\to\infty}\sum_{k=1}^n\bigl|k\bigl(f\bigl(\frac{1}{k}\bigr)-f\bigl(-\frac{1}{k}\bigr)\bigr)-2f'(0)\bigr|$ exists
Using Taylor expansion, we have: $$f\left(\frac1k\right)=f(0)+\frac1kf'(0)+\frac1{2k^2}f''(0)+O\left(\frac1{k^3}\right)$$ and similarly $$f\left(-\frac1k\right)=f(0)-\frac1kf'(0)+\frac1{2k^2}f''(0)+O\left(\frac1{k^3}\right)$$ hence we get $$f\left(\frac1k\right)-f\left(-\frac1k\right)=\frac2kf'(0)+O\left(\frac1{k^3}\right)$$ so the given sum is $$\sum_{k=1}^n\left|O\left(\frac1{k^2}\right)\right|\le M\sum_{k=1}^\infty\frac1{k^2}=\frac{M\pi^2}{6}$$ so the given sequence is increasing and bounded above, hence convergent.
The metric comes from a norm if and only if satisfies the following:
Intuitively the norm of something is its distance from $0$. So if you want a norm coming from a distance, expect $||x||=d(x,0)$.
decompose into a direct sum of irreps
Swapping the second and third rows, and second and third columns changes your matrix to $\pmatrix{ 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0& 0 & 1} $. Can you now see how to decompose your representation?
Prove that if $\gcd(m,n)=1$ then every divisor $d|mn$ has a unique form $d=ab$ such that $a|n$ and $b|m$.
Your proof is correct. For existence, observe that $d=(d,mn)$ and since $(m,n)=1$ you know $d=(d,mn)=(d,m)(d,n)$. By definition of $\gcd$ we have $(d,m)\mid m$ and $(d,n)\mid n$.
Finding a minimal number of charging stops along the route
Let $f:\mathbb{N}\times \mathbb{N}\to \mathbb{N}$ be a function, where $f(i,j)$, with $i,j \in \mathbb{N}$, denotes the minimum number of stops when we are at station $i$ and we have $j$ kilometres available in our tank. If $n$ is the number of stations and $D_i$ is the distance from Toronto to station $i$, then when we are at a station we can choose to not stop, or to stop and charge the car; obviously we want to choose the option which minimizes the number of stops. Therefore $$f(i,j) = \begin{cases} 1 + f(i + 1,\,X - \Delta D_i) & i \le n,\ j < \Delta D_i \\ \min\bigl(1 + f(i + 1,\,X - \Delta D_i),\ f(i + 1,\,j - \Delta D_i)\bigr) & i \le n,\ j \ge \Delta D_i \\ 0 & i = n + 1 \end{cases}$$ where $\Delta D_i=D_{i+1}-D_i$ for all $i \in \{1,\cdots n\}$ and we set $D_{n+1}$ as the distance from Toronto to Vancouver. We assume that $X \ge \Delta D_i$ for all $i$, i.e. we can go from one station to the next one. Now our answer will be $f(1,X-D_1)$. If you use the dynamic programming technique you can compute this in $O(nX)$.
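A short memoized transcription of this recurrence in Python may help (a sketch under my own conventions: 0-based station indices, $D$ sorted increasingly, and every gap at most $X$; none of the names come from the question):

```python
from functools import lru_cache

def min_stops(D, total, X):
    """D[i] = distance from Toronto to station i (0-based, sorted),
    `total` = distance from Toronto to Vancouver, X = full-charge range."""
    stops = list(D) + [total]              # stops[n] plays the role of D_{n+1}
    n = len(D)

    @lru_cache(maxsize=None)
    def f(i, j):                           # at stop i with j kilometres left
        if i == n:                         # arrived in Vancouver
            return 0
        gap = stops[i + 1] - stops[i]      # Delta D_i
        charge = 1 + f(i + 1, X - gap)     # stop and recharge here
        if j < gap:
            return charge                  # forced to stop
        return min(charge, f(i + 1, j - gap))

    return f(0, X - D[0])                  # start on a full charge

print(min_stops([100, 250, 400], 500, 300))  # hypothetical data: 1 stop
```

With integer distances the memo table has at most $n\cdot(X+1)$ entries, matching the $O(nX)$ bound above.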
Finding whether $\lim\sup x_n >0$
The supremum of a set of positive numbers is positive: the supremum is an upper bound, so it is greater than or equal to every element of the set, and these are positive, so it is positive. However, here we are taking a limit of supremums. Even if all the supremums are positive, the limit of the supremums need not be. For example, consider $a_n = \frac 1n$. It is a strictly decreasing sequence, i.e. $a_m < a_n$ for all $m > n$, so it follows that $\sup \{a_m : m > n\} = a_{n+1}$ for all $n \in \mathbb N$. But, $$\limsup a_n = \lim_{n \to \infty} \sup\{a_m : m > n\} = \lim_{n \to \infty} a_{n+1} = \lim_{n \to \infty} \frac 1{n+1} = 0$$ Therefore the statement, made as before, is false. But what is true is that $\limsup a_n $ $\color{grey}{\geq}$ $0$. To see this, it is actually enough to see that the limit of a sequence of positive quantities can only be non-negative, proved using a common $\epsilon-\delta$ argument.
Prove that $F$ is continous
Fix $y \in \mathbb{R}$. Note that $f$ is uniformly continuous in the compact set $K:= [0,1] \times [y-1,y+1]$. Now, let $\varepsilon &gt; 0$. By uniform continuity in $K$, there exists $\delta &gt; 0$ such that if $\|(x,y)-(x',y')\| &lt; \delta$ and $(x,y),(x',y') \in K$ then $|f(x,y)-f(x',y')| &lt; \varepsilon/2$. Without loss of generality, we can additionally assume $\delta &lt; 1$. If $|y-y'| &lt; \delta$, then we get that $\|(x,y)-(x,y')\| &lt; \delta$ for each $x \in [0,1]$. Moreover it has to be $(x,y),(x,y') \in K$ because $|y-y'| &lt; \delta &lt; 1$. This says that $$ |f(x,y)-f(x,y')| &lt; \varepsilon/2 \quad (\forall y' \in B_\delta(y), x \in [0,1]) $$ and thus $$ |F(y)-F(y')| \leq \int_0^1|f(x,y)-f(x,y')|d\mu \leq \int_0^1\varepsilon/2 \cdot d\mu = \varepsilon/2 &lt; \varepsilon. $$ which concludes the proof: for a given $\varepsilon$, we have found $\delta &gt; 0$ such that if $|y-y'| &lt; \delta$, then $|F(y)-F(y')| &lt; \varepsilon$.
Geometric/Visual Solution - Shortest Vector for which Dot Product = x + 2y = 5. (Strang P21 1.2.26)
To answer your first question: since $\|\mathbf{v}\|^2 = 5$, $$ \mathbf{v \cdot w} = 5 \Longrightarrow \mathbf{v} \cdot (k\mathbf{v}) = 5 \Longrightarrow k \|\mathbf{v}\|^2 = 5 \Longrightarrow k=1 $$ Regarding the second question ... By definition, we have $\mathbf{v \cdot w} = \|\mathbf{v}\| \cdot \|\mathbf{w}\| \cos\theta$, where $\theta$ is the angle between $\mathbf{v}$ and $\mathbf{w}$. But we know that $\mathbf{v \cdot w} = 5$, and $\|\mathbf{v}\| = \sqrt 5$, so $$ \|\mathbf{w}\|\, \cos\theta = 5/{\sqrt 5} = \sqrt 5. $$ Now $\|\mathbf{w}\| \cos\theta$ is the length of the projection of $\mathbf{w}$ onto $\mathbf{v}$. So, we are looking for the shortest possible vector $\mathbf{w}$ whose projected length along $\mathbf{v}$ is $\sqrt 5$. It's obvious (I think) that the shortest vector giving the desired projected length must be parallel to $\mathbf{v}$. In other words, if the length of $\mathbf{w}$ is to be minimized, $\mathbf{w}$ must be a scalar multiple of $\mathbf{v}$, say $\mathbf{w} = k\mathbf{v}$. $\color{green}{\bigstar}$ But $\|\mathbf{v}\| = \sqrt 5$, so, to get a projected length of $\sqrt 5$, the scalar multiplier $k$ must be 1. So $\mathbf{w} = \mathbf{v}$. Alternative methodology after $\color{green}{\bigstar}$: $\mathbf{w} = k\mathbf{v} \Longrightarrow \|\mathbf{w}\| = k\|\mathbf{v}\|$, and since $\|\mathbf{w}\| \geq 0$, for the shortest $\mathbf{w}$ such that $\mathbf{v \cdot w} = 5$ we must choose $k = 1$. Finally, since we already determined in the previous paragraph that $\mathbf{w} = k\mathbf{v}$, we conclude $\mathbf{w} = \mathbf{v}.$
Clever way of showing the map sending to prime power is outer automorphism
The only inner automorphism of an abelian group is the identity map!
The one value of super square root function
I would use the function $$ \operatorname{ssqrt}(x) = \exp( W( \log( x ))) $$ where $W$ is the Lambert-$W$ function (at its zeroth branch) and $x$ is in the range $ \exp(-\exp(-1))\approx 0.6922 \ldots 1$ as reference/as principal value. This gives for $x= 1/2^{1/2}\approx 0.707 $ $$ \operatorname{ssqrt}(x) = 0.5 =1/2 $$ ... for $x= 1/3^{1/3}\approx 0.693 $ $$ \operatorname{ssqrt}(x) \approx 0.403542672016 \approx 1/2.478 $$ ... for $x= 1/4^{1/4}\approx 0.707 $ $$ \operatorname{ssqrt}(x) = 0.5 =1/2 $$ and so on
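As a quick numerical cross-check of this principal value (a sketch; the helper name `ssqrt` is mine, and SciPy's `lambertw` returns the $k=0$ branch by default):

```python
import numpy as np
from scipy.special import lambertw

def ssqrt(x):
    """Principal super square root: the y with y**y == x, via y = exp(W(log x))."""
    return float(np.exp(lambertw(np.log(x)).real))

print(ssqrt(2 ** -0.5))     # 0.5, since (1/2)**(1/2) = 2**(-1/2)
print(ssqrt(3 ** (-1/3)))   # ~0.4035..., i.e. ~1/2.478
print(ssqrt(4 ** -0.25))    # 0.5 again, since 4**(-1/4) = 2**(-1/2)
```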
Ways to number a row of trees
Hint: disregarding the requirement for $3$ to be used at least twice, there would be $3 \cdot 2^{n-1}$ ways. Now subtract the number of ways with $0$ or $1$ $3$'s. Each configuration with no $3$'s consists of alternating $1$'s and $2$'s. Each configuration with one $3$ consists either of a $3$ followed by alternating $1$'s and $2$'s, or alternating $1$'s and $2$'s followed by a $3$ and then (if there are some trees left) alternating $1$'s and $2$'s.
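As a concrete check of this counting (my own small case): for $n=3$ there are $3\cdot 2^{2}=12$ adjacent-distinct labelings; $2$ of them use no $3$ ($121$ and $212$) and $8$ use exactly one $3$ ($2$ with the $3$ first, $2$ with it last, and $2\cdot 2=4$ with it in the middle), leaving $12-2-8=2$ valid labelings, namely $313$ and $323$.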
Equations in Landau/ Big O Notation
As you correctly said, $O(f(x))$ is a whole class of functions, and $A = O(B)$ actually means $A \in O(B)$. So actually your first equation says $-(x^2+x) \in O(\epsilon^2)$. But this implies $x^2 + x \in O(\epsilon^2)$ and therefore $x^2 + x = O(\epsilon ^2)$.
$x^3+y^4=7$ has no integer solutions
Consider the equation modulo $13$. Then $x^3$ can be $0,1,5,8,12$ and $y^4$ can be $0,1,3,9$. None of these add to $7$ modulo $13$. I chose $13$ because $3|\phi(13)$ and $4|\phi(13)$, so I could get restrictions on both $x^3$ and $y^4$.
Direct product for minimal normal subgroup
Following the comments, we assume the $S_i$'s are non-abelian simple groups. Next we will show that the $S_i$ are the only minimal normal subgroups of $N$. Suppose there is some other minimal normal subgroup $H$. Then $[S_i, H]$ is a normal subgroup of $S_i$. To see this, let $c\in [S_i, H]$; then, because $H$ is normal, we know $c\in H$, and for each $s\in S_i$ we have $$scs^{-1}c^{-1} \in [S_i, H]$$ and, by closure of group multiplication, $$scs^{-1}=(scs^{-1}c^{-1})c \in [S_i, H].$$ Furthermore, because the $S_i$'s are simple, $[S_i, H]$ must be $1$ (it cannot be all of $S_i$: that would give $S_i \subseteq H$, and minimality of $H$ would force $H = S_i$). This implies $H$ commutes with all the $S_i$'s, thus $H \subseteq Z(N)$. But since $Z(S_i) = 1$, we see that $Z(N) = 1$, contradicting the nontriviality of $H$. So the only minimal normal subgroups of $N$ are the $S_i$'s. Finally, for $g\in G$, conjugation by $g$ defines a group isomorphism from $N$ to $N$. Thus $S_i$ will be mapped to some minimal normal subgroup $S_j$ for some $j$.
Find CDF of constant derived random variable
Clearly $\mathsf P(Y{=}y)=\begin{cases}\mathsf P(X{<}0)&:& y=0\\\mathsf P(X{\geq}0)&:& y=100\\0&:& \textsf{elsewhere}\end{cases}$ Well, indeed, this will not have a proper probability density function, as its support consists of two point masses. However, that did not prevent you from denoting a probability density function for $X$ which had one point mass at $x=0$, denoted using the delta function times the point's probability mass of $1/3$. So ... use two delta functions and the appropriate probability masses.
How can I calculate this Riemann sum?
The sum $\frac1n\sum_{k=1}^{n-1}e^{k/n}$ is a Riemann sum for $e^x$ over $[0,1]$ with mesh $\frac1n$, so $$\lim_{n\to\infty} \sum_{k=1}^{n-1}\frac{e^{\frac{k}{n}}}{n}=\int_0^1e^xdx=e-1$$
Parallelizability of 2-manifolds
An orientable 2-dimensional manifold can be embedded into $\Bbb R^3$ so that it is a submanifold (see e.g. Wikipedia, or this question for the trickier non-compact cases). A submanifold $M$ of $\Bbb R^3$ has a well-defined normal vector $n(p)$ which depends continuously on $p\in M$. The cross product $n(p)\times X(p)$ then also depends continuously on $p\in M$, is linearly independent of $X(p)$ and still lies in the tangent space. Therefore, it provides the desired second vector field.
Tree with a size greater than n -1
When you add the degrees of all the vertices together, you end up counting each edge twice: an edge $vw$ contributes $1$ to the degree of $v$ and $1$ to the degree of $w$. So if your tree has $35$ vertices (and therefore $34$ edges) the sum of all degrees should be $34 \cdot 2 = 68$.
Order of integration problem in probability
This is a simple application of the indicator function technique and Fubini's theorem: $$ \begin{align} \int\limits_0^\infty \int\limits_{\{x : g(x) > t\}} f_X(x)dxdt &= \int\limits_0^\infty \int\limits_{-\infty}^\infty f_X(x) 1_{\{(x,t) : g(x) > t\}}dxdt \\ &=\int\limits_{-\infty}^\infty \int\limits_0^\infty f_X(x) 1_{\{(x,t) : g(x) > t\}}dtdx \\ &=\int\limits_{-\infty}^\infty \int\limits_0^\infty f_X(x) 1_{\{(x,t) : g(x) > t\geq 0\}}dtdx \\&=\int\limits_{-\infty}^\infty \int\limits_{\{t : 0\leq t < g(x)\}} f_X(x) dtdx \end{align} $$
Extremely trivial integral $\frac{1}{\tau}\int_0^\tau \mathrm{e}^{\mathrm{i} (n - m) t} \mathrm{d}t = \delta_{nm}$
It doesn't work, but if you were integrating over the interval $(-\pi,\pi)$, then it would, due to the periodicity of the trig functions. I think this must be what is meant.
Well defined function meaning
One example of a not-well-defined function would be $f(\frac{a}{b})=a+b.$ Then, while $\frac{1}{2} = \frac{2}{4}$, we have $f(\frac{1}{2}) = 3$ but $f(\frac{2}{4})=6.$ So you'll often see, in proofs about rational numbers, that the fraction is specified to be "in lowest terms" and maybe "with denominator positive", so that the operations given are well-defined.
Integral $\int {1\over x^2+8x-3}\quad dx$
This quadratic has two real roots, $x = -4\pm\sqrt{19}$, so this is better handled with partial fractions: $$ \int \frac{dx}{x^2 + 8x -3} = \int\left[\frac{1}{x+4-\sqrt{19}}-\frac{1}{x+4+\sqrt{19}}\right]\frac{dx}{2\sqrt{19}} = \frac{1}{2\sqrt{19}}\ln\left|\frac{x+4-\sqrt{19}}{x+4+\sqrt{19}}\right| + C $$
Proof Verification: C closed, convex, symmetric in Banach space X and $\cup_{n \in N \setminus 0} n.C= X$ then $B_\epsilon(0) \in C $.
In order to invoke Baire you have to assume the closedness of $C$. Otherwise your proof is fine.
To show that Fermat number $F_{5}$ is divisible by $641$.
If you know at least a few basic facts about congruences, it's not difficult to do this by hand, e.g. as follows: $\newcommand{\kong}[3]{{#1}\equiv{#2}\pmod{#3}}$ $\kong{2^8}{256}{641}$ $2^{16} \equiv 256^2= 64\cdot4\cdot256 =1024\cdot64=102\cdot640+256 \equiv \kong{256-102}{154}{641}$ $2^{32} \equiv 154^2 = 14^2\cdot11^2 = 196\cdot121 = (3\cdot64+4)(2\cdot64-7)= 6\cdot64^2+8\cdot64-21\cdot64-28=(384+8-21)\cdot64-28 = 371\cdot64-28 = 37\cdot640+64-28\equiv \kong{-37+36}{-1}{641}$ The last line means that $641\mid F_5=2^{32}+1$. We have just used $\kong{640}{-1}{641}$ a few times. Hardy and Wright give in their book a different argument, see p.18. If we notice that $$641=5^4+2^4=5\cdot 2^7+1$$ then we have that $$641\mid 5^4\cdot2^{28}+2^{32}$$ (we multiplied the first expression by $2^{28}$) and we also have $$641\mid 5^4\cdot2^{28}-1$$ if we use $x+1\mid x^4-1$ for $x=5\cdot2^7$. If we subtract the above numbers, we get $$641\mid 2^{32}+1=F_5$$ They attribute this proof to Coxeter, see p.27.
Consider the subspaces U = span{2 + x, 1 − x^2}, and W = span{−1 + x + x^2 , −x + x^2} of P2(R). Find U ∩ W
If your computation is correct, i.e. $b = -5a, c = 3a, d = 2a$ holds, then you simply insert the result in the description of $U$ or $W$. That is, using the description of $U$, the intersection $U \cap W$ can be expressed as the set $$ \{ a(2 + x) - 5a(1 − x^2) | a \in \mathbb{R} \} = \{ a(5x^2 + x - 3) | a \in \mathbb{R} \} $$ or, equivalently, using the description of $W$, $$ \{ c(−1 + x + x^2) + d(−x + x^2) \} = \{ 3a(−1 + x + x^2) + 2a(−x + x^2) | a \in \mathbb{R} \} $$ which is again the same description as the one given above, i.e. $$ \{ a (5x^2 + x - 3 ) | a \in \mathbb{R} \} \ . $$
Is dot product the only rotation invariant function?
$$\vert x\times y\vert$$ is invariant under rotations because vector modulus is. It uses the dot product, but it is not a scalar function of the dot product of its arguments. $$f(x,y)=\vert x\times y\vert=\sqrt{(x·x)(y·y)-(x·y)^2}\ne g(x·y)$$
How does the area of $A+A$ compare to the area of $A-A$?
Yes. By the Brunn–Minkowski inequality, $|A-A|^{1/2}\geq|A|^{1/2}+|-A|^{1/2}$, so $|A-A|\geq 4|A|$. The hypotheses of this theorem do not include convexity, although they do include compactness. On the other hand, under the assumption that $A$ is convex, $A+A = 2A$, and $|A+A|=4|A|$. Therefore $|A-A|\geq |A+A|$.
martingales, almost sure convergence
You can use the Borel-Cantelli lemma to deduce the first part, i.e. $$\sum_{n=1}^\infty\mathbb{P}(X_n=-n^2)<\infty.$$
Is it possible to find the partial sum?
We have: $$ S_N = \sum_{n=0}^{N}\frac{3^{n+1}}{1+2^{n+1}} = \sum_{n=0}^{N}\left(\frac{3}{2}\right)^{n+1}\frac{1}{1+\frac{1}{2^{n+1}}}=\sum_{n=0}^{N}\left(\frac{3}{2}\right)^{n+1}\sum_{m\geq 0}\frac{(-1)^m}{2^{(n+1)m}}$$ hence by exchanging the sum on $m$ and the sum on $n$, $$ S_N = 3\sum_{m\geq 0}(-1)^{m}\frac{2^{(m+1)(N+1)}-3^{N+1}}{(2^{m+1}-3)2^{(m+1)(N+1)}}$$ the last series converges pretty fast for any $N$, but I wouldn't bet on a simple closed form, except perhaps in terms of the Hurwitz zeta function or something related.
Obtain the location of point E on side AB such that ACDE is a trapezium
Hint: The triangle $ABC$ is half an equilateral triangle because $\frac{\overline{AB}}{\overline{AC}}=\frac{1}{2}=\cos (\angle CAB)$. Look at the figure and you can easily solve the problem.
Exercise of interior of a closed ball
Let $x$ be an interior element of the closed ball. Then there is some $\delta &gt; 0$ so that $B(x, \delta)$ lies in the closed ball. Every element of $B(x, \delta)$ lies in the open ball. You can use the triangle inequality to see this.
Graphing $af(bx-c)+d$?
If $c/b$ is positive, then subtracting it inside will move it to the right. Think about it this way: $$f(x)\to f(x-2)$$ For it to be the same, $x$ must be $2$ units larger to cancel the $-2$. Similarly, imagine the following: $$f(x)\to f(2x)$$ For it to stay the same, $x$ must be half the original size, hence, we divide by $b$.
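Putting the pieces together on a made-up example: to graph $g(x)=3f(2x-6)+1$, first rewrite $2x-6=2(x-3)$; the graph of $f$ is then compressed horizontally by a factor of $2$, shifted right by $3$ (which is $c/b=6/2$), stretched vertically by $3$, and shifted up by $1$.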
how to solve this equation using logarithm, if not possible how to solve it?
Newton's method is very efficient for this sort of equation, especially when you have a good initial estimate. It's easy to guess $x\approx 0.2$ to begin. So, using Newton's iteration $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)},$$with $$f(x)=0.2948\left(1-\frac1{(1+x)^5}\right)-x,\quad\quad f'(x)=\frac{1.474}{(1+x)^6}-1,$$and $x_0=0.2$, we can successively compute $x_1,x_2,$ and so on. Just two iterations will be enough to give good accuracy, and three is plenty.
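A direct transcription of the iteration (a sketch with my own variable names; the function and constants are the ones above):

```python
f  = lambda x: 0.2948 * (1 - 1 / (1 + x) ** 5) - x
fp = lambda x: 1.474 / (1 + x) ** 6 - 1

x = 0.2                    # initial estimate
for _ in range(3):
    x = x - f(x) / fp(x)   # Newton step
    print(x)               # converges to x ~ 0.145 within a few steps
```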
Proof of equivalence classes constituting a partition.
We want to prove $e\in[a]\Rightarrow e\in[c]$ If $e\in[a]$ then we have $a\sim e$, and by symmetry $e\sim a$. On the other hand we have $d\in[a]\cap[c]$ by hypothesis, so $a\sim d$ and $c\sim d$, and this last one implies $d\sim c$ by symmetry. Since we have $e\sim a$ and $a\sim d$ then we have $e\sim d$ by transitivity. Now we have $e\sim d$ and $d\sim c$, so by transitivity again $e\sim c$, and by symmetry $c\sim e$, which means $e\in[c]$ as we wanted to show.
Hint Needed: Proving $\sqrt{2}$ is irrational using induction
You may or may not be familiar with the typical proof by contradiction, which goes something like this: Assume $\sqrt{2} = \frac{a}{b}$ in lowest terms (i.e. with $\text{gcd}(a,b) = 1$). Square both sides to give $2b^2 = a^2$. Note that $2|LHS$, so $2|RHS$, which implies that $2|a$. However, 2 must also divide $b$. (Can you see why?) This contradicts the assumption that $\frac{a}{b}$ was in lowest terms. If you drop the assumption that $\frac{a}{b}$ was in lowest terms, you can still achieve a contradiction using induction. Specifically, you can start with the equation $2b^2 = a^2$ and prove the statement "$2^k|a$ for all $k \in \mathbb{N}$" using induction on $k$. This forces $a=0$, which is impossible since $a$ is a positive integer.
Is an integral basis $\{e_i\}$ of $\mathcal O_K$ under the action of $\mu\in Gal(K/\mathbb Q)$ also an integral basis?
The subring $\mathcal O_K$ is stable under $Gal(K/\mathbb Q)$, as one easily verifies. Thus $\mu$ is an automorphism of $\mathcal O_K$ (as a ring, and in particular as a $\mathbb Z$-module). The answer to your question is now seen to be yes, since any automorphism of a free $\mathbb Z$-module will take one basis to another basis.
Lipschitz Number in Gradient Descent
Gradient descent need not always go downhill -- google non-monotone gradient-descent. Can you describe your problem a bit: classification, regression ... ? Is the data sparse ? What program are you using ? Are the gradients smooth, or noisy ? I can recommend Stochastic gradient descent in scikit-learn.
Can a Homeomorphism exist between two discontinuous spaces.
Yes: in particular, if $V_{1,1} = V_{1,2}$ and $V_{2,1} = V_{2,2}$, then the identity is such a map, and is always a homeomorphism, without any assumptions on the nature of the space involved.
Can uncountability of reals be proved only from the axioms?
Since you're introducing non-first-order axioms (such as completeness - and the compactness and Lowenheim-Skolem theorems say you really need to in this context), there is no really satisfying notion of "proof." Instead, we need to go "fully semantic" - the right notion here instead of "$\Gamma$ proves $\varphi$" is "Every model of $\Gamma$ is a model of $\varphi$", and in jargon this would be phrased as "$\Gamma$ entails $\varphi$." The logic you're working in here is second-order logic. The axioms you listed are not enough to entail that $\mathbb{Q}$ is countable, or indeed that $\vert\mathbb{Q}\vert<\vert\mathbb{R}\vert$, since - as noted in the comments - there are proper (hence incomplete) subfields of $\mathbb{R}$ which have cardinality continuum. So what do we need here? A sufficient axiom - and in my opinion, the right choice here - is that the rationals form the smallest ordered field. Another way to phrase this, which you may find more palatable, is "Every subfield of $\mathbb{R}$ contains $\mathbb{Q}$." This does indeed pin down $\mathbb{Q}$ exactly, and hence is enough to entail countability. If you'd rather use ideas about bijections, you can do that too: a set $S$ is countably infinite iff $S$ is Dedekind infinite (= there is a bijection between $S$ and some proper subset of $S$), and for each Dedekind-infinite $T\subseteq S$, there is a bijection between $T$ and $S$. So we can write axioms entailing the countability of $\mathbb{Q}$ in this way, too. And all of this can be expressed in second-order logic. A couple further remarks: Re: the need for second-order logic here, it's important to note that there are other logics stronger than first-order, perhaps the most important one being $\mathcal{L}_{\omega_1,\omega}$, which is the simplest infinitary logic (and is much better behaved than second-order logic in many ways). The general study of logics stronger than first-order logic is called abstract model theory, and there is a fantastic (if difficult) collection on the subject edited by Barwise and Feferman. Re: the above demonstration of the insufficiency of your proposed axioms, an interesting question here: when does an algebraic structure have a proper substructure of the same cardinality? Structures which don't are called "Jonsson" - see e.g. here - and their study is important in set theory.
Integral of sum of two sine waves
$y=\sin \left( x \right)$ and $y=\sin \left( x-\frac{170\pi }{180} \right)$ are the two waves, with the latter shifted by a phase of 170º. Now the sum is: $$y=\sin \left( x \right)+\sin \left( x-\frac{17\pi }{18} \right)$$ Let's find two adjacent $x$-intercepts to integrate between: $\sin \left( x \right)+\sin \left( x-\frac{17\pi }{18} \right)=0$ $x=\frac{17\pi }{18}-x+2\pi k,\; k\in\mathbb{Z}$ $x=\frac{17\pi }{36}+\pi k$ I'll take $k=0$ and $k=1$: $$x=\frac{17\pi }{36},\frac{53\pi }{36}$$ Now we integrate: $$\int_{\frac{17\pi }{36}}^{\frac{53\pi }{36}}{\sin \left( x \right)+\sin \left( x-\frac{17\pi }{18} \right)dx=2\left( \cos \left( \frac{17\pi }{36} \right)-\cos \left( \frac{19\pi }{36} \right) \right)}$$ which is approximately $0.348623$.
Check whether or not $\sum_{n=1}^{\infty}{1\over n\sqrt[n]{n}}$ converges.
Since $\lim_{n\to\infty}\sqrt[n]n=1$, for sufficiently large $n$ we have $$\frac1{n\sqrt[n]n}\ge \frac1{2n}$$ Can you take it from here?
Use of FFT in the multiplication of multinomials
Community wiki answer so the question can be resolved: As pointed out in the comments, this can be done using multidimensional FFT, with the exponents of the variables serving as coordinates.
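A small sketch of that idea with NumPy's $n$-dimensional FFT (the example polynomials are made up): each variable's exponent indexes one axis of the coefficient array, and zero-padding each axis to the degree of the product avoids wrap-around.

```python
import numpy as np

# p[i, j] = coefficient of x**i * y**j
p = np.array([[1, 2],    # 1 + 2y
              [3, 0]])   # + 3x
q = np.array([[0, 1],    # y
              [1, 0]])   # + x

# Pad each axis to hold the product's degrees, transform, multiply pointwise, invert.
shape = [p.shape[k] + q.shape[k] - 1 for k in range(p.ndim)]
prod = np.fft.ifftn(np.fft.fftn(p, shape) * np.fft.fftn(q, shape))
prod = np.round(prod.real).astype(int)

print(prod)  # coefficient grid of (1 + 2y + 3x)(y + x)
```

For integer coefficients the rounding step undoes the floating-point error; for exact arithmetic over large coefficients one would switch to a number-theoretic transform instead.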
Another proof of $|\vec{a}\times\vec{b}|^2 = |\vec{a}|^2\cdot|\vec{b}|^2 - (\vec{a}\cdot\vec{b})^2$.
Assume without loss of generality $\vec{a}\ne\vec{0}$. Set $\vec{c}=\vec{a}$ so$$(\vec{a}\times\vec{b})\times\vec{a}=a^2\vec{b}-\left(\vec{a}\cdot\vec{b}\right)\vec{a}.$$This is a cross product of perpendicular vectors, so its square modulus is$$|\vec{a}\times\vec{b}|^2a^2=a^4b^2+\left(\vec{a}\cdot\vec{b}\right)^2a^2-2a^2\left(\vec{a}\cdot\vec{b}\right)^2=a^2\left(a^2b^2-\left(\vec{a}\cdot\vec{b}\right)^2\right).$$Now cancel the $a^2$ factors.
How can I compare two series that are same thing, but at different rates?
A series $\sum_n a_n$ converges, by definition, if and only if the sequence of its partial sums $(A_n)_n$ is convergent. Here, you have that $A_{2n} = H_n - 1 \xrightarrow[n\to\infty]{}\infty$. Therefore, the sequence $(A_n)_n$ is not convergent (as otherwise every one of its subsequences would be).
A goat is tied to the corner of a shed
In business and the trades, at least before everything went to decimal notation for fractions, you would almost never see someone write a number as (for example) $\frac 52.$ Instead they would write $2\frac12,$ which by convention was read as a single number equal to $2+\frac12.$ This notation is called a mixed fraction. It is highly discouraged in most mathematical settings, but you can still see it used sometimes, especially in old puzzle books. While I was trying not to be U.S.-centric in this answer, I should acknowledge that mixed fractions are still extremely common in the U.S. for many kinds of measurements, and as noted in the comments are seen in some contexts in at least a few other countries.
Show that the subset of a real ordered field defined by a ring formula has a least upper bound.
Note that the theory of real closed fields is complete. This means that any two real closed fields are elementarily equivalent. You can use these facts to solve your problem: Since the subset defined by $\phi(x)$ in $\mathbb{R}$ is bounded from above, it must have a least upper bound in $\mathbb{R}$ (this follows from the order-completeness of the ordered field $\mathbb{R}$). Now, you have to verify that the statement "the set defined by $\phi(x)$ has a least upper bound" can be formalized by a sentence in the language of ordered rings. Assume $\psi$ is the corresponding sentence. Then, because of my previous remarks, $\mathbb{R} \vDash \psi$, and since any two real closed fields are elementarily equivalent, it follows that $F \vDash \psi$. $\textbf{Edit}:$ Here is a hint concerning the formalization of the above statement. Start with $$\exists x[\forall y(\phi(y) \to (y<x \vee y=x)) \wedge \phantom{a}...\phantom{a}]$$ This formula expresses that there is an upper bound $x$ for the set defined by $\phi$. Now, try to complete this formula such that it expresses that $x$ is also a least upper bound.
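For reference (the original answer deliberately leaves this as an exercise), one possible completion is $$\exists x\Big[\forall y\big(\phi(y) \to (y<x \vee y=x)\big) \;\wedge\; \forall z\Big(\forall y\big(\phi(y) \to (y<z \vee y=z)\big) \to (x<z \vee x=z)\Big)\Big],$$ which says that $x$ is an upper bound and that every upper bound $z$ satisfies $x\le z$.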
Determine the number of digits in $4^n$
Hint: For a real number $r$, how many digits are in $10^r$ (before the decimal point)? Can you write $4^n$ as $10^r$ for some $r$? (Hint: Logarithm).
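The hint leads to the formula $\lfloor n\log_{10}4\rfloor + 1$ for the number of digits of $4^n$; here is a quick empirical check (a sketch, not part of the original hint):

```python
# Check: the number of digits of 4**n equals floor(n*log10(4)) + 1.
import math

for n in range(1, 50):
    assert len(str(4 ** n)) == math.floor(n * math.log10(4)) + 1
print("formula verified for n = 1..49")
```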
How to prove this, $ | \Delta f | \leqslant n | \nabla^2 f| $
Use the Cauchy-Schwarz inequality: $$ |\Delta f|=\left|\sum\limits_{i=1}^n \partial_i^2 f\right|= \left|\sum\limits_{i=1}^n 1\cdot\partial_i^2 f\right|\leq \left(\sum\limits_{i=1}^n 1^2\right)^{1/2}\left(\sum\limits_{i=1}^n (\partial_i^2 f)^2\right)^{1/2}\leq \sqrt{n}|\nabla^2 f|\leq n|\nabla^2 f| $$ where the second-to-last step uses that the pure second derivatives $\partial_i^2 f$ are among the entries of the Hessian, so their $\ell^2$ norm is at most the Frobenius norm $|\nabla^2 f|$.
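A numeric spot-check for a concrete function (a sketch; $f(x,y)=x^2y+y^3$ is a made-up example):

```python
# Check |Laplacian| <= sqrt(n)*|Hessian|_F <= n*|Hessian|_F at one point.
import numpy as np

x, y = 1.3, -0.7
H = np.array([[2 * y, 2 * x],      # Hessian of f(x, y) = x^2*y + y^3
              [2 * x, 6 * y]])
lap = np.trace(H)                  # Laplacian = trace of the Hessian
frob = np.linalg.norm(H)           # Frobenius norm of the Hessian
print(abs(lap) <= np.sqrt(2) * frob <= 2 * frob)   # True
```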
Find a positive number $\delta<2$ such that $|x−2| < \delta \implies |x^2−4| < 1$
The choice $\delta=\frac{1}{|x+2|}$ is inappropriate for a more fundamental reason. We want a $\delta$ such that for any $x$ with $|x-2|\lt \delta$, we have $|x^2-4|\lt 1$. So in particular $\delta$ must not depend on $x$. We now proceed to find an appropriate $\delta$. Note that $|x^2-4|=|x-2||x+2|$. We can make $|x-2|$ small by choosing $\delta$ small, but the $|x+2|$ term could spoil things. Suppose however that we take $\delta=\frac{1}{5}$. If $|x-2|\lt \delta$, then $2-\frac{1}{5}\lt x\lt 2+\frac{1}{5}$. Thus $x+2$ is positive and less than $5$. It follows that $|x+2|\lt 5$, and therefore if $|x-2|\lt \delta$ we have $$|x-2||x+2|\lt \frac{1}{5}\cdot 5=1.$$
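An empirical confirmation that $\delta=\frac15$ works (a sketch, not part of the argument):

```python
# Sample points with |x - 2| < 1/5 and confirm |x^2 - 4| < 1 on all of them.
import numpy as np

xs = np.linspace(2 - 1/5, 2 + 1/5, 100001)[1:-1]   # strictly inside the interval
print(np.abs(xs**2 - 4).max() < 1)                 # True
```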
Finding a certain rotation matrix
Here is a simple way to achieve it. We want the vector $U=(a_1,a_2, \cdots, a_n)^T$ to be mapped onto the vector $V=(1,1, \cdots, 1)^T$ by a certain matrix (we assume, as you do, that $U$ and $V$ have the same norm; the $T$s are there for considering them as column vectors). Let $D=U-V$ and let $N=D/\|D\|$ be its normalized version (therefore $N^TN=1$, a property that will be useful later). Then consider the so-called Householder operator: $$H_N:=I_n-2NN^T $$ It has the property: $$H_N^2=H_NH_N=(I_n-2NN^T )(I_n-2NN^T )=I_n-4NN^T+4N\underbrace{N^TN}_{1}N^T=I_n$$ proving that it is an involution - in fact, it is the reflection in the hyperplane with normal vector $N$. Since $U$ and $V$ have the same norm, they are mirror images of each other in that hyperplane, and therefore $$H_NU=V$$ (this can also be established by direct computation).
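A minimal sketch of this construction in code (the helper name householder_map is mine, not from the original answer):

```python
# Build the Householder reflection H with H @ u == v, assuming
# ||u|| == ||v|| and u != v.
import numpy as np

def householder_map(u, v):
    d = u - v
    n = d / np.linalg.norm(d)            # unit normal of the mirror hyperplane
    return np.eye(len(u)) - 2.0 * np.outer(n, n)

u = np.array([3.0, 0.0, 0.0])
v = np.full(3, np.sqrt(3.0))             # all-ones direction, same norm as u
H = householder_map(u, v)
print(np.allclose(H @ u, v))             # True
```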
Finding all truth assignment to 2SAT
Construct a graph in which each variable and its negation are nodes; if there are $n$ variables, this graph has $2n$ nodes. Note that $A\lor B$ is equivalent to both $\lnot B\implies A$ and $\lnot A\implies B$, so each clause of the 2-SAT instance contributes $2$ edges. If the graph has a variable and its negation in the same strongly connected component, then the 2-SAT instance is unsatisfiable. Using the graph, if $A\implies\lnot A$, it means that $\lnot A$ must hold. If a variable's value cannot be deduced in this way, you can set it arbitrarily to true or false to generate all possibilities.
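A minimal sketch of the implication-graph construction together with an SCC-based satisfiability test (Kosaraju's algorithm; the function name and the encoding of literals as signed integers are my choices, not from the original answer):

```python
# 2-SAT: variables are 1..n, a literal is +v or -v, and a clause (a OR b)
# contributes the implication edges (-a -> b) and (-b -> a).
from collections import defaultdict

def satisfiable(n, clauses):
    g, gr = defaultdict(list), defaultdict(list)
    for a, b in clauses:
        for u, v in ((-a, b), (-b, a)):   # the two implications of (a OR b)
            g[u].append(v)
            gr[v].append(u)

    lits = [l for i in range(1, n + 1) for l in (i, -i)]
    seen, order = set(), []

    def dfs1(u):                          # first pass: record finish order
        seen.add(u)
        for w in g[u]:
            if w not in seen:
                dfs1(w)
        order.append(u)

    for u in lits:
        if u not in seen:
            dfs1(u)

    comp = {}
    def dfs2(u, c):                       # second pass on the reversed graph
        comp[u] = c
        for w in gr[u]:
            if w not in comp:
                dfs2(w, c)

    for c, u in enumerate(reversed(order)):
        if u not in comp:
            dfs2(u, c)

    # Unsatisfiable iff some variable shares an SCC with its negation.
    # (A satisfying assignment sets v true iff comp[-v] < comp[v].)
    return all(comp[v] != comp[-v] for v in range(1, n + 1))

# (x1 OR x2) AND (NOT x1 OR x2) is satisfiable, e.g. with x2 = True.
print(satisfiable(2, [(1, 2), (-1, 2)]))   # True
```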
Poisson Process. Three independent processes
Let $X_{i}$ be the time until the first catch for $i=1,2,3$; hence $X_{i} \sim \operatorname{Exp}(2)$ (rate $2$), independently. You are interested in the time by which every one of them has caught a fish, namely, in the distribution of $X_{(3)} = \max\{ X_{1}, X_{2}, X_{3}\}$. You can show that $P(X_{(3)}\le x) = [F_{X}(x)]^3 = (1-e^{-2x})^3$. Now, to find $E[X_{(3)}]$ you can use $\int tf_{X_{(3)}}(t)\,dt$, or $E[X_{(3)}] = \int(1-F(t))\,dt$ for non-negative random variables, or alternatively utilize some properties of the exponential distribution, i.e., \begin{align} E[X_{(3)}] &= E[\min\{X_1, X_2, X_3\}] + E[A]\\ &=1/6 + E[A], \end{align} where the first summand stems from the minimum of three independent $\operatorname{Exp}(2)$ variables being $\operatorname{Exp}(6)$, and $A$ is the additional time until the remaining two have caught a fish. Using the memoryless property of the exponential distribution (and repeating the argument with two and then one fisherman remaining), you get $$ E[X_{(3)}] = 1/6 + 1/4 + E[X_1]= 1/6 + 1/4 + 1/2 = 11/12. $$
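A Monte Carlo sanity check of this value (a sketch, not part of the original answer):

```python
# Estimate E[max of three independent Exp(rate 2)] and compare with 11/12.
import numpy as np

rng = np.random.default_rng(0)
m = rng.exponential(scale=1/2, size=(10**6, 3)).max(axis=1)
print(m.mean(), 11/12)   # both ~ 0.9167
```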
Categorical proof of a property of direct limits of abelian groups
Let $G = \lim\limits_{\longrightarrow} G_\alpha$ and $G' = \bigcup i_\alpha(G_\alpha)$. The maps $i_\alpha$ map $G_\alpha$ into $G'$ and are compatible with the $f_{\beta,\alpha}$. Let $T$ be a set with maps $\eta_\alpha : G_\alpha \to T$ that are compatible with the $f_{\beta,\alpha}$. There exists a unique map $\phi : G\to T$ such that for all $\alpha$, $\phi\circ i_\alpha = \eta_\alpha$. Define a map $f : G'\to T$ by setting $f(g) = \eta_\alpha(g_\alpha)$ if $g = i_\alpha(g_\alpha)$. If $f$ is well-defined, then it is the unique map such that for all $\alpha$, $f\circ i_\alpha = \eta_\alpha$. To see that $f$ is well-defined, suppose $i_\alpha(g_\alpha) = i_\beta(g_\beta)$ for some $\alpha$ and $\beta$. Then $$\eta_\alpha(g_\alpha) = \phi(i_\alpha(g_\alpha)) = \phi(i_\beta(g_\beta)) = \eta_\beta(g_\beta).$$ Hence $f$ is well-defined, and consequently $G'$ is a direct limit of the $G_\alpha$. In particular, the universal property of $G$ applied to the maps $G_\alpha\to G'$ yields a map $\psi : G\to G'$ with $\psi\circ i_\alpha = i_\alpha$; composing with the inclusion $\iota : G'\hookrightarrow G$ gives a map $\iota\circ\psi : G\to G$ commuting with all the $i_\alpha$, which by uniqueness must be the identity. Hence $\iota$ is surjective, i.e. $G' = G$. Now $h(G) = h(G') = \bigcup hi_\alpha(G_\alpha) = \bigcup h_\alpha(G_\alpha)$, proving the first part. For the second part, note that for every $\alpha$, $\operatorname{ker}(h) \cap i_\alpha(G_\alpha) = i_\alpha(\operatorname{ker}(h_\alpha))$. As $G = \bigcup i_\alpha(G_\alpha)$ and $\operatorname{ker}(h)\subset G$, then $\operatorname{ker}(h) = \bigcup\, (\operatorname{ker}(h)\cap i_\alpha(G_\alpha)) = \bigcup i_\alpha(\operatorname{ker}(h_\alpha))$.
A Function of a Convolution (Laplace)
I think I've figured it out. I'm not sure what the norms about answering your own question are, but I post this in case it's helpful to others. 1) Define $v_n = a_n \epsilon_n \sim \operatorname{Expo}(1/a_n)$. 2) Define $X = \sum_n v_n$. As $X$ is a sum of independent random variables, its PDF $f(x)$ is a convolution of their densities, and thus the Laplace transform of $X$ is the product of the individual Laplace transforms of the $v_n$. Because all the $a_n$ are distinct (assuming this holds), we can write this product in partial-fractions form: $$\mathcal{L}\{f(x)\} = F^X(s) = \Pi_n F^{V_n}(s) = \sum_n \frac{1}{\Pi_{n,N}}F^{V_n}(s)$$ The RHS by definition is thus: $$\sum_n \frac{1}{\Pi_{n,N}}\int_0^\infty f^{V_n}(t) e^{-st} dt$$ Because everything is nice and convergent, I think we can switch the order of summing and integrating to get: $$\int_0^\infty \left[\sum_n \frac{1}{\Pi_{n,N}}f^{V_n}(t)\right] e^{-st} dt$$ The LHS is: $$\int_0^\infty f^X(t) e^{-st} dt$$ Now, imagine I wanted to know the Laplace transform of $g(t)f^X(t)$, i.e. $$\int_0^\infty g(t) f^X(t) e^{-st} dt$$ I think, and this is the part I'm not sure about, that this is equivalent to writing the following on the RHS: $$\int_0^\infty g(t) \left[\sum_n \frac{1}{\Pi_{n,N}}f^{V_n}(t)\right] e^{-st} dt$$ Because we assume that $E(g(v_n)) < \infty$ for all $n$, I think we can again switch the order of integration and summation to get: $$\sum_n \frac{1}{\Pi_{n,N}} \int_0^\infty g(t) f^{V_n}(t) e^{-st} dt$$ To complete the result, note that at $s=0$ we recover the expectation: $$E(g(X)) = \sum_n \frac{1}{\Pi_{n,N}} E(g(a_n \epsilon_1))$$
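As a concrete sanity check of the product-of-transforms / partial-fraction step (a sketch, not part of the original derivation): for two distinct rates, expanding the product of transforms in partial fractions gives the standard hypoexponential density, which we can compare against simulation.

```python
# Density of Exp(l1) + Exp(l2) from the partial-fraction expansion of
# l1*l2 / ((l1+s)(l2+s)), checked against a Monte Carlo histogram slice.
import numpy as np

rng = np.random.default_rng(0)
l1, l2 = 1.0, 3.0
x = rng.exponential(1/l1, 500_000) + rng.exponential(1/l2, 500_000)

t, h = 1.2, 0.01
density = l1 * l2 / (l2 - l1) * (np.exp(-l1 * t) - np.exp(-l2 * t))
estimate = ((x > t - h) & (x < t + h)).mean() / (2 * h)
print(estimate, density)   # both ~ 0.41
```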