Problem on $\operatorname{Hom}(\mathbb Z_6,R^*\oplus C^*)$
Hint: Since $\mathbb{Z}/6 \cong \mathbb{Z}/2 \times \mathbb{Z}/3$, for any abelian group $G$ (such as $R^*\oplus C^*$) a homomorphism $f : \mathbb{Z}/6 \to G$ corresponds to a pair of homomorphisms $(g,h)$ with $g : \mathbb{Z}/2 \to G$ and $h : \mathbb{Z}/3 \to G$.
Measure theory question I encountered
$$F(t+h) - F(t) -h \int_X \frac{\partial f}{\partial t} (t,x)\, d\mu(x) = \int_X \left(f(t+h,x) - f(t,x) - h\frac{\partial f}{\partial t} (t,x) \right)d\mu(x). $$ Now, using the fundamental theorem of calculus, this equals $$ \int_X \int_t^{t+h}\left( \frac{\partial f}{\partial t} (u,x) - \frac{\partial f}{\partial t} (t,x)\right)du\, d\mu(x) =\int_X \int_a^b 1_{t\le u\le t+h}\left( \frac{\partial f}{\partial t} (u,x) - \frac{\partial f}{\partial t} (t,x)\right)du\, d\mu(x). $$ Using the continuity of $$ (u,x)\mapsto \frac{\partial f}{\partial t} (u,x) $$ and the fact that $X\times [a,b]$ is compact, you can use the dominated convergence theorem (with domination by a constant) to get that this quantity, divided by $h$, goes to $0$ as $h\to 0$.
Exponential equation, variable on both sides
Rewrite this as $$ -\frac{Rvh}{v_1} = -vht\cdot e^{-vht} $$ Applying the Lambert $W$ function we get $$ W\left(-\frac{Rvh}{v_1}\right) = -vht\\ t = -\frac{W\left(-\frac{Rvh}{v_1}\right)}{vh} $$ Using the $W$ function isn't much more than a rewriting, but there is nothing else that can be done when we have $t$ both in the exponent and outside it.
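For a numerical check one can evaluate the $W$ function with SciPy. The constants below are illustrative placeholders, and the starting equation is assumed (by reading the rewriting backwards) to be $R = v_1 t\, e^{-vht}$; note that $-Rvh/v_1$ must lie in $[-1/e,0)$ for the principal branch to give a real solution.

```python
import numpy as np
from scipy.special import lambertw

R, v, h, v1 = 0.5, 2.0, 0.3, 4.0                 # illustrative placeholder values

t = -lambertw(-R * v * h / v1).real / (v * h)    # principal branch; lambertw(..., k=-1) gives a second root
print(t, v1 * t * np.exp(-v * h * t))            # the second number should reproduce R
```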
Proof of Alternate, Corresponding and Co-interior Angles
Not everything has a proof. As the link Sawarnik gave says, there are axioms and postulates regarding this fact. We haven't been able to find two parallel lines cut by a transversal whose alternate and co-interior angles aren't equal: the day someone does, Euclid and others will be proved wrong. But anyway, you can always check this geometrically by making a parallel-lines-and-transversal setup. Cut out the co-interior angles and put them next to each other; you will see that together they form exactly a semicircle, like a protractor, i.e. $180°$. Similarly, corresponding and alternate interior angles will overlap, showing that they are equal... If they aren't, you will become famous!
Positive semidefinite versus all the eigenvalues having non-negative real parts
In terms of the second question, this is true. Since $B$ is a diagonal positive definite matrix, $B^{\frac{1}{2}}$ is invertible. Then $B^{\frac{1}{2}}(AB)B^{-\frac{1}{2}}=B^{\frac{1}{2}}AB^{\frac{1}{2}}=(B^{\frac{1}{2}})^TAB^{\frac{1}{2}}$, which is positive semi-definite. Since similar matrices have the same eigenvalues, all the eigenvalues of $AB$ are real and non-negative; in particular they have non-negative real parts.
Help with improper integrals containing partial fractions
You did not make any mistakes; the thing you're missing with the limits is that $\ln a-\ln b=\ln\frac{a}{b}$. Evaluate the limit, noting that $\ln \frac{b}{b+1} \to 0$ and $\frac{1}{b+1} \to 0$ as $b \to \infty$: $$3\lim_{b\to \infty}\left( \ln x - \ln(x+1) + \frac{1}{x+1}\right) \biggm|_{1}^b $$ $$= 3\lim_{b\to \infty}\left( \ln \frac{x}{x+1} + \frac{1}{x+1}\right) \biggm|_{1}^b $$ $$= 3\lim_{b\to \infty}\left( \ln \frac{b}{b+1} + \frac{1}{b+1} - \ln \frac{1}{1+1} - \frac{1}{1+1}\right) $$ $$= 3\left( 0 + 0 - \ln \frac{1}{2} - \frac{1}{2}\right) $$ $$= 3\ln 2 - \frac{3}{2}.$$ The second integral can be evaluated with the same method.
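As a sanity check, the antiderivative above corresponds to the integrand $\frac{3}{x(x+1)^2}$ (recovered by differentiating, so treat that as an assumption about the original problem); SymPy reproduces the value.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
value = sp.integrate(3 / (x * (x + 1)**2), (x, 1, sp.oo))   # assumed integrand, consistent with the antiderivative

print(value)                                                         # 3*log(2) - 3/2 (possibly printed in another order)
print(sp.simplify(value - (3*sp.log(2) - sp.Rational(3, 2))))        # 0
```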
What is the mechanism of Eigenvector?
Matrix as a map

How does a matrix transform the locus of unit vectors? Pick an example matrix such as $$ \mathbf{A} = \left[ \begin{array}{rr} 1 & -1 \\ 0 & 1 \\ \end{array} \right]. $$ As noted by @Widawensen, a convenient representation of the unit circle is the locus of vectors $\mathbf{S}$: $$ \mathbf{S} = \left[ \begin{array}{l} \cos \theta \\ \sin \theta \end{array} \right], \quad 0 \le \theta \lt 2\pi $$ The matrix product shows the mapping action of the matrix $\mathbf{A}$: $$ \mathbf{A} \mathbf{S} = \left[ \begin{array}{c} \cos (\theta )-\sin (\theta ) \\ \sin (\theta ) \end{array} \right] $$ The plots below show the colored vectors from the unit circle on the left. On the right we see how the matrix $\mathbf{A}$ changes the unit vectors.

Singular value decomposition

To understand the map, we start with the singular value decomposition: $$ \mathbf{A} = \mathbf{U} \, \Sigma \, \mathbf{V}^{*} $$ The beauty of the SVD is that every matrix has a singular value decomposition (existence); the power of the SVD is that it resolves the four fundamental subspaces. The singular value decomposition is an eigendecomposition of the matrix product $\mathbf{A}^{*} \mathbf{A}$. The singular values are the square roots of the nonzero eigenvalues: $$ \sigma \left( \mathbf{A} \right) = \sqrt{ \lambda \left( \mathbf{A}^{*} \mathbf{A} \right) } $$ The singular values are, by construction, positive and are customarily ordered. For a matrix of rank $\rho$, the expression is $$ \sigma_{1} \ge \sigma_{2} \ge \dots \ge \sigma_{\rho} > 0 $$ The normalized eigenvectors are the column vectors of the domain matrix $\mathbf{V}$. The column vectors for the codomain matrix $\mathbf{U}$ are constructed via $$ \mathbf{U}_{k} = \sigma_{k}^{-1} \left[ \mathbf{A} \mathbf{V} \right]_{k}, \quad k=1,\dots,\rho. $$ Graphically, the SVD looks like this: the first column vector is plotted in black, the second in blue. Both coordinate systems are left-handed (determinant = -1). The SVD orients these vectors to align the domain and the codomain. Notice in the mapping action that some vectors shrink, others grow. The domain and codomain have different length scales, and this is captured in the singular values. Below, the singular values are represented as an ellipse with equation $$ \left( \frac{x}{\sigma_{1}} \right)^{2} + \left( \frac{y}{\sigma_{2}} \right)^{2} = 1. $$

SVD and the map

Finally, we bring the pieces together by taking the map image $\mathbf{A}\mathbf{S}$ and overlaying the basis vectors from the codomain $\mathbf{U}$, scaled by the singular values. The black vector is $\sigma_{1} \mathbf{U}_{1}$, blue is $\sigma_{2} \mathbf{U}_{2}$.

Powers

Repeated applications of the map accentuate the effects of the map. For this example $$ \mathbf{A}^{k} = \left[ \begin{array}{cr} 1 & -k \\ 0 & 1 \\ \end{array} \right], \quad k = 1, 2, 3, \dots $$ The first maps in the sequence are shown below on a common scale.
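A quick NumPy illustration of the example above (a sketch; note that eigvalsh returns the eigenvalues in ascending order while svd returns the singular values in descending order):

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [0.0,  1.0]])

U, s, Vt = np.linalg.svd(A)                      # A = U @ diag(s) @ Vt
print(s)                                          # sigma_1 >= sigma_2 > 0
print(np.sqrt(np.linalg.eigvalsh(A.T @ A)))       # same values, ascending: sqrt of eigenvalues of A^T A

# The unit circle S is mapped onto an ellipse with semi-axes sigma_1, sigma_2 along the columns of U:
theta = np.linspace(0.0, 2.0 * np.pi, 400)
S = np.vstack([np.cos(theta), np.sin(theta)])
AS = A @ S
back = (U.T @ AS) / s[:, None]                    # in U-coordinates, rescaling by 1/sigma_i recovers unit vectors
print(np.allclose(np.linalg.norm(back, axis=0), 1.0))   # True
```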
Can I think of this as an eigenvalue/eigenvector problem?
If you find an eigenvector $v$ for $S$ with eigenvalue $\lambda$, you can then take $P$ to be the projection onto the span of $v$: concretely, $$ P=\frac1{v^*v}\,vv^*. $$ This works, because if $Sv=\lambda v$, then for any $x$ you have $SPx=\lambda Px$, and so $PSPx=\lambda Px$. As this holds for any $x$, you have $PSP=\lambda P$.
Expected value (mean) is zero.
When you write something like $$Q \sim \mathcal{N}(\mu_Q, \sigma_Q^{2}I),$$ then you are saying that $$\mathbb{E}[Q] = \mu_Q.$$
When is the inverse image of continuous function a submanifold
You can't really say much: given an arbitrary closed set $A \subseteq \mathbb{R}^n$, the distance function $f(x) = d(A,x)$ is continuous and $A = f^{-1}(0)$, so every closed subset is the inverse image of a point under a continuous function, but most closed sets are very far from being manifolds.
Find the general solution of $ y'''- y'' - 9y' +9y = 0 $
There is a "universal" approach to this type of problem. We look for solutions of the form $e^{mx}$. Substituting $y=e^{mx}$ into our DE, we get after a short while $m^3e^{mx}-m^2e^{mx}-9me^{mx}+9e^{mx}=0$. This will be satisfied if and only if $m^3-m^2-9m+9=0$. The above cubic equation happens to be easy to solve. It can be rewritten as $m^2(m-1)-9(m-1)=0$, and then as $(m-1)(m^2-9)=0$. The solutions are $m=1$, $m=3$, and $m=-3$. So $e^x$, $e^{3x}$, and $e^{-3x}$ are solutions of our DE. Because our DE is linear and its right-hand side is $0$, that means that for any constants $A$, $B$, and $C$, $y=Ae^x+Be^{3x}+Ce^{-3x}$ is a solution. Because our differential equation has order $3$, and (it turns out) the $3$ solutions we found are linearly independent, all solutions are of the form $y=Ae^x+Be^{3x}+Ce^{-3x}$. Remark: The above is a more or less universal method for dealing with linear differential equations that have constant coefficients and right-hand side $0$. There are some complications. If the polynomial equation that we get has multiple roots (example: $(m-1)(m-3)^2$) then some adjustment needs to be made. It turns out that in that example, $e^x$, $e^{3x}$, and $xe^{3x}$ will give us a set of basic solutions. Also, if some of the roots of the polynomial we get are non-real, we probably want to make an adjustment, and express a solution like $y=e^{5ix}$, which is technically correct, in terms of sines and cosines.
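If you want to check the answer with a computer algebra system, SymPy's dsolve handles constant-coefficient linear equations (the arbitrary constants may come out labeled or ordered differently):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x, 3) - y(x).diff(x, 2) - 9*y(x).diff(x) + 9*y(x), 0)
print(sp.dsolve(ode, y(x)))
# expected: y(x) = C1*exp(x) + C2*exp(3*x) + C3*exp(-3*x), up to labeling/order of the constants
```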
"Prove that $\sup\{\sqrt2,\sqrt{2+\sqrt2},\cdots\}=2$": How to show that the set is bounded above?
How can I show that this set is bounded above in an elementary way? Denote by $a_k$ the general term: $a_1 = \sqrt{2}$, $a_2 = \sqrt{2+\sqrt{2}}$, $a_3 = \sqrt{2+\sqrt{2+\sqrt{2}}}$, etc. Clearly $a_1 = \sqrt2 \leq 2$. Suppose $a_k \leq 2$. Then $$a_{k+1} = \sqrt{2+a_k} \leq \sqrt{2+2} = 2, $$ so by induction every $a_k \leq 2$. I don't know of any elementary (read: no sequences or convergence theorems) way of showing it is the least upper bound.
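Numerically the terms do creep up towards $2$ from below, which is reassuring even if it is not a proof:

```python
from math import sqrt

a = sqrt(2)
for k in range(1, 11):
    print(k, a)        # increasing, always below 2
    a = sqrt(2 + a)
```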
Inquiry into operator precedence grammar
"I would like to know about the specific mathematical properties it presents." From what I have gleaned here, an operator precedence grammar is a preorder on a grammar with special properties (outlined in the first link). The idea is that they define something like the order of operations for arithmetic expressions in integers (where multiplication takes precedence over addition, etc.). More broadly speaking, it is a special binary relation on the grammar. "I would be interested in knowing whether it is a kind of Polish notation." The grammar is not a notation any more than the statement "$2$ is less than $4$" is a notation. If you are asking whether or not authors use any sort of Polish notation when they talk about precedences, then the answer is maybe. An author can choose to use such a notation or not. The book I linked to above uses infix notation, not prefix or postfix notation. In any case, notation does not change the meaning of what an operator precedence grammar is. "...and also whether it might be useful for representing right ideals in a ring." I am a ring theorist, but I haven't heard of any connections to this, nor can I immediately spot any connection to ideals or rings.
Orthogonal projection in Hilbert Space proof
OK, I think: $$t(f-u,v)\leq (f-u,u)\quad \forall t \in \mathbb{R}.$$ But $(f-u,u)$ is fixed, while the left-hand side can be made arbitrarily large (letting $t\to+\infty$ or $t\to-\infty$) unless $(f-u,v)=0$. So $(f-u,v)$ must equal $0$. Is this right?
Sample-based median calculation
Of course you can't find the median exactly without examining all the data, but you could use the sample median as an estimator of the population median.
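A small illustration with a made-up skewed population (the exponential population below is purely an assumption for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=1_000_000)      # hypothetical population
sample = rng.choice(population, size=5_000, replace=False)   # examine only a sample

print(np.median(population))   # the exact median needs all the data
print(np.median(sample))       # sample median: a close estimate
```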
Does the existence of an onto mapping $f: \mathbb{N} \to A$ indicate the existence of a one-one mapping $g: A \to \mathbb{N}$?
Since $\mathbb{N}$ has an inherent well-order (viz., $\omega$), you don't need AC to construct $$ g(a):=\min\{n\in\mathbb{N}\mid f(n)=a\}. $$ This $g$ is well-defined because $f$ is onto, and injective because $f(g(a))=a$.
Enumerations of the rationals with summable gaps $(q_i-q_{i-1})^2$
Summary: There exists a (bijective) enumeration $(q_i)_{i\geqslant0}$ of the rationals in $[0,1]$ such that, for every $a\gt1$, the series $\sum\limits_i|q_i-q_{i-1}|^a$ converges. This result is optimal in the sense that no (bijective) enumeration $(q_i)_{i\geqslant0}$ of the rationals in $[0,1]$ is such that the series $\sum\limits_i|q_i-q_{i-1}|$ converges. Let us show the first point. For every $n\geqslant0$ and $0\leqslant k\lt 2^n$, define the interval $I_{2^n+k}$ as $$I_{2^n+k}=\left\{\begin{array}{ll}(k\cdot2^{-n},(k+1)\cdot2^{-n}]&\text{if $n$ is even},\\ (1-(k+1)\cdot2^{-n},1-k\cdot2^{-n}]&\text{if $n$ is odd}.\end{array}\right. $$ Thus, for every $n\geqslant0$, $(I_{2^n+k})_{0\leqslant k\lt 2^n}$ is a partition of $(0,1]$ into $2^n$ intervals of length $2^{-n}$. For every $i\geqslant 2^n$, if $x$ is in $I_i$ and $y$ in $I_{i+1}$, then $|x-y|\leqslant c\cdot2^{-n}$, probably for $c=2$ and certainly for $c=42$. Define recursively $(q_i)_{i\geqslant0}$ as follows: let $q_0=0$ and, for every $n\geqslant0$ and $0\leqslant k\lt 2^n$, let $q_{2^n+k}$ denote the rational in $I_{2^n+k}$ not already in $\{q_i\mid i\lt 2^n+k\}$ whose reduced fraction has minimal denominator and, if several such rationals exist, minimal numerator. Then $(q_i)_{i\geqslant0}$ enumerates the rationals in $[0,1]$. Furthermore, for every $n\geqslant0$, if $2^n\leqslant i\lt 2^{n+1}$, then $$|q_i-q_{i-1}|\leqslant c\cdot2^{-n}.$$ Hence the slice of the sum $\sum\limits_i|q_i-q_{i-1}|^a$ for $2^n\leqslant i\lt 2^{n+1}$ is at most $$2^n\cdot(c\cdot2^{-n})^a=c^a\cdot2^{-(a-1)n}.$$ Since $a\gt1$, summing these shows that the complete series converges. QED. The fact that the series $\sum\limits_i|q_i-q_{i-1}|$ diverges for every enumeration is easy. Define recursively the sequence $(N(n))_{n\geqslant0}$ as follows: let $N(0)=0$ and, for every $n\geqslant0$, let $N(n+1)$ denote the smallest integer $i\geqslant N(n)$ such that $$ |q_i-q_{N(n)}|\geqslant\tfrac12. $$ Then $N(n)$ is finite for every $n$ (why?) and the triangular inequality yields $$\sum\limits_i|q_i-q_{i-1}|\geqslant\sum\limits_n|q_{N(n)}-q_{N(n-1)}|\geqslant\sum\limits_n\tfrac12, $$ which is infinite.
Does a data-dependent sampling rule induce correlation?
Without knowing more about how $p$ depends on the data in the first half we can't tell much of anything. The two variables $\hat{\theta}_1$ and $\hat{\theta}_2$ might be correlated, and they might not. They might be independent, and they might not (though no random variables can be both independent and correlated). It's perfectly reasonable for our method of selecting $p$ to be always setting $p=\frac12$. Determinism is a subset of indeterminism, and it's a nice source of examples and counter-examples in probability and statistics. In this case, we clearly have independence of the variables, and since we have independence they aren't correlated. Update: If one wanted a stochastic choice of $p$, that would be fine too here. Any stochastic choice of $p$ that doesn't depend on $\hat{\theta}_1$ would have the same effect. For more fun, with carefully chosen $X_i$ and $Y_i$ you can actually make it so that $\hat{\theta}_1$ and $\hat{\theta}_2$ are independent even when $p$ explicitly depends on $\hat{\theta}_1$. One example of such a choice happens with a stream of length $2$, with all the $X_i$ i.i.d uniform on $[0,1]$ and all the $Y_i$ i.i.d on $[-1,0]$. The conditional probability that $\hat{\theta}_1$ was drawn from $X$ given that we observe it to be any particular value is always $\frac12$, and that suffices to give independence. In other examples, it's feasible to have dependence between the variables. Suppose the $X_i$ and $Y_i$ are all i.i.d, and in fact just suppose they're uniform on $[0,1]$ (denote this $U(0,1)$). Our data stream is going to have two elements so that $\hat{\theta}_1$ is based only on the first element and $\hat{\theta}_2$ is based solely on the second element. The rule we're going to use for selecting $p$ is that $p=1$ if $\hat{\theta}_1>0$ and $p=0$ otherwise. In this extremely contrived example (and other more complicated examples too...just making a point without muddying up the waters), we'll be able to show the variables are correlated. In particular, note that $\hat{\theta}_1$ is actually just $U(-1,1)$, and $\hat{\theta}_2$ is actually just $\text{sgn}(\hat{\theta}_1)U(0,1)$, where the signum function is equal to $-1$ for negative values and $1$ for positive values, and where we really don't care about its value at $0$ because the set $\{0\}$ has measure $0$ and doesn't affect our probabilities. Sparing you the busywork (it's a good exercise), the population Pearson correlation coefficient between $\hat{\theta}_1$ and $\hat{\theta}_2$ in this case is $\frac18$ -- which we sort of expected since $\hat{\theta}_2$ is non-negative when $\hat{\theta}_1$ is non-negative and $\hat{\theta}_2$ is non-positive when $\hat{\theta}_1$ is non-positive. Update: Choosing $p=0,1$ was just a tool to make computing the correlations easy. The correlation of $\frac18$ we observed varies smoothly as a bivariate function of our two separate choices of $p$ ($1$ when $\hat{\theta}_1$ is positive and $0$ otherwise) and can take on any value from $-\frac18$ to $\frac18$. Except for extraordinarily bad choices of $p$ (I think the only inflections happen when $p$ is the same for all values of $\hat{\theta}_1$) this correlation is still non-zero. Even when we have a dependence between the variables, it's perfectly feasible for them to not be correlated with suitably chosen parameters. Keep the 2-item data stream from the last example. Let the $X_i$ be i.i.d uniform on the set $[-1,1]$, and let the $Y_i$ be i.i.d uniform on the set $[-2,-1]\cup[1,2]$. 
Select $p=1$ whenever $\hat{\theta}_1\in[-1,1]$ and $p=0$ otherwise. This has dependence just like in the last example for basically the same reason. The main difference is in the symmetry of the sets in question and in how we select $p$. When we sample $\hat{\theta}_1$ and $\hat{\theta}_2$, the result is either two i.i.d samples from $U(-1,1)$ or two i.i.d samples from the uniform distribution on $[-2,-1]\cup[1,2]$. Using that fact we can quickly compute that the population Pearson correlation coefficient between these two random variables is $0$. Update: As in the last example, the reliance on crisp selections of $p$ was just out of laziness. It turns out that any selection of $p$ in this problem actually still leaves you with a population Pearson correlation coefficient of $0$ since all the $X_i$ and $Y_i$ separately have a correlation of $0$. That's easier to argue with crisp selections of $p$ since we're strictly considering i.i.d distributions, but it's a fact that any two independent uniform distributions symmetrically distributed around $0$ with positive variance are uncorrelated.
How to prove that if $f(x)$ and $g(x)$ have the same asymptotic notation, their integral functions have it too.
It is not true; here is a counterexample: $f(x)=\begin{cases} \sin x\quad\text{ if }x\in\big[0,\pi\big]\\ 0\qquad\;\text{ if }x\in\big]\pi,+\infty\big[ \end{cases}\quad,$ $g(x)=0\quad\text{ for all }x\in\big[0,+\infty\big[\;.$ It follows that $f,g:\big[0,+\infty\big[\to\big[0,+\infty\big[\;$ are continuous functions, $f(x)=O\big(g(x)\big)\;$ as $\;x\to+\infty\;,$ indeed there exist $\;M=1>0\;$ and $\;x_0=\pi\;$ such that $0\leqslant f(x)\leqslant Mg(x)\;$ for all $\;x\geqslant x_0\;.$ But $F(x)\ne O\big(G(x)\big)\;$ as $\;x\to+\infty\;,$ indeed $F(x)=\displaystyle\int_0^x f(t)dt=\begin{cases} 1-\cos x\quad\text{ if }x\in\big[0,\pi\big]\\ 2\qquad\qquad\text{ if }x\in\big]\pi,+\infty\big[ \end{cases}\;,$ $G(x)=\displaystyle\int_0^x g(t)dt= 0\quad\text{ for all }x\in\big[0,+\infty\big[\;,$ consequently there do not exist any $\;M>0\;$ and any $x_0\geqslant0\;$ such that $0\leqslant F(x)\leqslant MG(x)\;$ for all $\;x\geqslant x_0\;.$ Addendum: If we want the original poster's property to hold, we need to add another hypothesis. An additional hypothesis could be the following one: there exists $\;x^*\geqslant0\;$ such that $\;g(x^*)>0\;.$ Theorem: If $\;f,g:\big[0,+\infty\big[\to\big[0,+\infty\big[\;$ are continuous functions, there exists $\;x^*\geqslant0\;$ such that $\;g(x^*)>0\;$ and $f(x)=O\big(g(x)\big)\;$ as $\;x\to+\infty\;,\;$ then $\;F(x)=O\big(G(x)\big)\;$ as $\;x\to+\infty\;,$ where $\;\displaystyle F(x)=\int_0^x f(t)dt\;$ and $\;\displaystyle G(x)=\int_0^x g(t)dt\;.$ Proof: By hypothesis, $f(x)=O\big(g(x)\big)\;$ as $\;x\to+\infty\;,$ so there exist $\;M>0\;$ and $\;x_0\geqslant0\;$ such that $0\leqslant f(x)\leqslant Mg(x)\;$ for all $\;x\geqslant x_0\;.\quad\color{blue}{(*)}$ Since $\;g(x)\;$ is a non-negative continuous function and there exists $\;x^*\geqslant0\;$ such that $\;g(x^*)>0\;,\;$ it follows that $\displaystyle\int_0^{x^*+1}g(t)dt>0\;.$ By letting $\;x_1=\max\big\{x_0,\;x^*+1\big\}>0\;,\;$ we get that $\displaystyle\int_0^{x_1}g(t)dt\geqslant\int_0^{x^*+1}g(t)dt>0\;.$ Moreover, by letting $\;N=\max\left\{M,\;\dfrac{\int_0^{x_1}f(t)dt}{\int_0^{x_1}g(t)dt}\right\},$ we get that $\displaystyle\int_0^{x_1}f(t)dt\leqslant N\!\int_0^{x_1}g(t)dt\;.\quad\color{blue}{(**)}$ From $\;(*)\;$ it follows that $\displaystyle\int_{x_1}^x\!\!\!f(t)dt\leqslant M\!\!\int_{x_1}^x\!\!\!g(t)dt\leqslant N\!\!\int_{x_1}^x\!\!\!g(t)dt\;\;$ for all $\;x\geqslant x_1\;,$ hence $\displaystyle\int_{x_1}^x\!f(t)dt\leqslant N\!\!\int_{x_1}^x\!g(t)dt\;\;$ for all $\;x\geqslant x_1\;.\quad\color{blue}{(*\!*\!*)}$ By adding the inequalities $\;(**)\;$ and $\;(*\!*\!*)\;,\;$ we get that $\displaystyle\int_0^x\!f(t)dt\leqslant N\!\!\int_0^x\!g(t)dt\;\;$ for all $\;x\geqslant x_1\;.$ Hence, there exist $\;N>0\;$ and $\;x_1>0\;$ such that $0\leqslant F(x)\leqslant NG(x)\;$ for all $\;x\geqslant x_1\;$ and this means that $F(x)=O\big(G(x)\big)\;$ as $\;x\to+\infty\;.$
relationship between topology and convergences
What Owen said is correct. It is not sufficient to specify a topology by defining what it means for sequences to converge. For example, the discrete topology and the cocountable topology on $\mathbb{R}$ yield nonhomeomorphic spaces, but their notions of sequential convergence are the same: a sequence converges iff it is eventually constant. In practice, when defining topologies in this way, the definition of convergence you want to make for sequences still makes sense if you replace the word "sequence" with the word "net", and so fixing this is quite easy. In particular, in your case, if you replace the word "sequence" with the word "net", there should be a unique topology on $C_c^{\infty}$ and $C^{\infty}$ whose notion of convergence agrees with your given condition. Of course, you can't just make any old definition of convergence. In order for it to define a topology, your definition has to satisfy a list of axioms---see, for example, Theorem 3.4.8 on pg. 105 here (note that if you're reading this in the future, these numbers may have changed). This theorem gives the precise formulation of how one may define topologies by declaring what it means for nets to converge.
Continuity of inverse using epsilon delta
Yes, we have $g(f(x)) = x$ for all $x \in [a,b]$. Furthermore $f(g(y))=y$ for all $y \in [c,d].$ With sequences: let $y_0 \in [c,d]$ and let $(y_n)$ be a sequence in $[c,d]$ such that $y_n \to y_0$. We have to show that $g(y_n) \to g(y_0)$. To this end put $x_n:=g(y_n)$. Then $(x_n)$ is a sequence in $[a,b]$. Let $x_0$ be an accumulation point of $(x_n)$. Then there is a subsequence $(x_{n_k})$ such that $x_{n_k} \to x_0$ as $k \to \infty$. Since $f$ is continuous, we have $y_{n_k}=f(x_{n_k}) \to f(x_0)$. This gives $f(x_0)=y_0$, hence $x_0=g(y_0)$. Conclusion: the bounded sequence $(g(y_n))$ has exactly one accumulation point: $g(y_0)$. This shows that $g(y_n) \to g(y_0)$.
What is independence of events (in case of tossing an unbiased coin 3 times)?
If you want a bit of intuition, think like this: if you know $A$ happened you have no idea if $B$ happened or not. With probability half you got $TTT$ and then $B$ happened and with the same probability you got $HHH$ and then $B$ didn't happen. So you have no idea, the probability of $B$ happening is exactly the same as it was if you didn't know that $A$ happened. Now, with $C$ it is different. The probability of the event $B$ in general is $\frac{1}{2}$. However, if you know $C$ happened then with probability $\frac{2}{3}$ you know that you got either $HHT$ or $HHH$. So the probability that $B$ did happen is not more than $\frac{1}{3}$ in that case. There is dependence indeed. Anyway, this is just intuition. There are formal definitions for independence.
proper ideals of $\prod_{i\in I}A_i$
Yes. For any non-principal ultrafilter $U$ on $I$, you can consider the ideal of elements of $\prod A_i$ which are equal to zero $U$-almost everywhere. The quotient of $\prod A_i$ by this ideal is the corresponding ultraproduct.
Using definite quantification in logic
Try and do these translations of complex statements step by step: divide and conquer. And, stick to the division between subject term (those things about which you say something) and predicate term (what you say about those things), so you can follow the standard forms for translating those. 'Haters hate the hated' is of course 'all haters hate anyone who is hated'. So, subject term: 'haters' Predicate term: 'hate anyone who is hated' So this becomes: $$\forall x (\text{'x is a hater'} \rightarrow \text{'x hates anyone who is hated'})$$ Now, 'x is a hater' translates as: $$\exists w H(x,w)$$ while 'x hates anyone who is hated' can be paraphrased as 'all those who are hated are hated by x', which has: Subject term: 'hated' Predicate term: 'hated by x' So the form here is: $$\forall y (\text{'y is hated'} \rightarrow \text{'x hates y'})$$ 'y is hated' becomes: $$\exists z H(z,y)$$ and 'x hates y' is of course: $$H(x,y)$$ OK, so we can plug in the last two parts: $$\forall y (\exists z H(z,y) \rightarrow H(x,y))$$ And now we can plug that into the very first form: $$\forall x (\exists w H(x,w) \rightarrow \forall y (\exists z H(z,y) \rightarrow H(x,y)))$$
Birthday problem with an arbitrary "year" and number of overlapping birthdays
Suppose you had $N$ possible birthdays and $M$ people (with birthdays chosen independently and uniformly). The probability of $k$ people sharing a birthday seems difficult to calculate, but much easier is the expected number of (unordered) $k$-tuples sharing a birthday. Namely, there are ${M \choose k}$ possible $k$-tuples, the probability that a particular $k$-tuple shares the same birthday is $N^{1-k}$, so the expected number is ${M \choose k} N^{1-k}$. The expected number of $k$-tuples is an upper bound on the probability of existence of such a $k$-tuple (by Markov's inequality). If $M$ is very large, ${M \choose k} \approx M^k /k!$ so the expected number of $k$-tuples is approximately $M^k N^{1-k}/k!$. Thus in your case with $M = 4.1 \times 10^6$, $N = 5.05 \times 10^7$ and $k = 7044$, the expected number is ridiculously small, about $9.5 \times 10^{-31722}$. Even with $k = 6$, the expected number is about $0.020$. On the other hand, for $k = 5$ the expected number is about $1.484$. So it seems likely that the maximum number of people sharing a birthday is $5$.
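A quick way to reproduce these numbers (working with logarithms to avoid overflow; $M$ and $N$ below are the values from the question):

```python
from math import lgamma, log, exp

def expected_k_tuples(M, N, k):
    """Expected number of k-subsets of the M people that all share a birthday (N possible days)."""
    log_binom = lgamma(M + 1) - lgamma(k + 1) - lgamma(M - k + 1)   # log C(M, k)
    return exp(log_binom + (1 - k) * log(N))

M, N = 4_100_000, 50_500_000
for k in (5, 6, 7):
    print(k, expected_k_tuples(M, N, k))    # ~1.48, ~0.020, ~2.3e-7
```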
How can I prove: $\sum_{n\leq x}\frac{\phi(n)}{n^2} = \frac{\log x}{\zeta(2)}+\frac{C}{\zeta(2)} + A + O\left(\frac{\log x}{x}\right)$
Let us implement the approach suggested by Winther in the comments, with a minor variation. From $$ \sum_{n\leq x}\varphi(n) = \frac{x^2}{2\zeta(2)}+O(x\log x) \tag{1}$$ and Abel's summation formula we get: $$ \sum_{n\leq x}\frac{\varphi(n)}{n^2}=\frac{1}{2\zeta(2)}+O\left(\frac{\log x}{x}\right)+2\int_{1}^{x}\left( \frac{u^2}{2\zeta(2)}+O(u\log u)\right)\frac{du}{u^3}\tag{2} $$ and the claim readily follows by rearranging terms.
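A numerical sanity check of the asymptotic: the difference $\sum_{n\le x}\varphi(n)/n^2-\frac{\log x}{\zeta(2)}$ should settle down to a constant as $x$ grows.

```python
from math import log, pi
from sympy import totient

zeta2 = pi**2 / 6

def S(x):
    return sum(int(totient(n)) / n**2 for n in range(1, x + 1))

for x in (1_000, 10_000, 100_000):
    print(x, S(x) - log(x) / zeta2)   # these values should be nearly equal
```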
When will open morphisms be flat?
Here is one result, which answers your Question 3: Theorem [EGA IV$_{3}$, Cor. 15.2.3]. Let $Y$ be a locally noetherian scheme, and let $f \colon X \to Y$ be a morphism locally of finite type. Let $x \in X$ be a point with image $y = f(x)$. Suppose the following conditions hold: the morphism $f$ is universally open at the generic point of every irreducible component of $f^{-1}(y)$ containing $x$; the fiber $f^{-1}(y)$ is geometrically reduced (over $\kappa(y)$) at $x$; and the local ring $\mathcal{O}_{Y,y}$ is reduced. Then, $f$ is flat at the point $x$. This says that universally open morphisms that are not flat must have some non-reducedness somewhere, either in the fibers or on the base, thereby answering your Question 1, at least if you strengthen "open" to "universally open." There are surely examples of open but not universally open morphisms, too, though. Arrow asked two more questions in the comments. My intuition for a non-reduced base is that for the morphism $f$ to be flat, the morphism should look like a restriction of a nice family on a larger base space. So you can't expect flatness to be captured purely topologically. Depending on your taste, non-reduced fibers are less geometric, since this won't happen if $X$ is smooth and everything is over a field of characteristic $0$ by generic smoothness. Again depending on your taste, open but not universally open morphisms are less geometric, since they cannot happen if the base is reduced and normal [EGA IV$_{3}$, Cor. 14.4.9]. I find this MathOverflow question insightful. In particular, it mentions an example of an open morphism which is not universally open, hence not flat; there is another example [EGA IV$_{3}$, Rem. 14.3.9(i)] in the comments.
Proving $(\text{ ker } A)^\perp \subset \text{ image } A^T$
$$x\perp \text{image}(A^T)$$$$\iff\langle x,A^Tz\rangle =0,\forall z\in\Bbb R^m$$$$\iff z^T(Ax)=x^TA^Tz=0,\forall z\in \Bbb R^m$$ $$\bigg[\text{Being a real number, }\langle x,A^Tz\rangle=x^TA^Tz \text{ equals its transpose }z^TAx\bigg]$$$$\iff Ax=0$$$$\iff x\in \text{ker}(A).$$ So, $$\big(\text{image}(A^T)\big)^\perp=\text{ker}(A)\iff\text{image}(A^T) =\bigg(\big(\text{image}(A^T)\big)^\perp\bigg)^\perp=\big(\text{ker}(A)\big)^\perp.$$ I have used the fact that, for any subspace $V$ of a finite dimensional inner product space we have, $(V^\perp)^\perp=V$.
Dense *-subalgebras of C*-algebras and their intersections with sub-C*-algebras
Take $A = L^{\infty}[0,1]$ with pointwise multiplication and let $B = C[0,1]$ be the closed $C^{\ast}$-subalgebra of continuous functions. The $\ast$-subalgebra $D \subset A$ consisting of the simple functions (finite linear combinations of characteristic functions of measurable sets) is dense in $L^{\infty}[0,1]$ but $D \cap B = \{\text{constant functions}\}$ is as far from dense in $B$ as it gets. Note that all $*$-algebras are commutative and unital here.
Topology Exercises Books
Fundamentals of General Topology: Problems and Exercises, Arkhangel'skii A., Ponomarev V.
Elementary Topology: Problem Textbook, Viro O., Ivanov O., Netsvetaev N., Kharlamov V.
The Topology Problem Solver, The Staff of REA
Bounded linear operator, strange definition.
The term "bounded" has a special meaning (the one you wrote down) when it comes to linear operators on topological vector spaces (like normed spaces). It's a common definition in Functional analysis. It is not equivalent to the usual notion of a bounded map. (And it's not really helping to say this that it's an unfortunate term, since it is widely used in the pertaining literature). See also here
Show that $x^{n-1}y^{n-1} = (yx)^{n-1}$ in a group G where $(xy)^n = x^ny^n$ for some fixed n.
It is indeed a simple trick! If $(xy)^n = x^n y^n$ then $xy (xy)^{n-2} xy = x^n y^n$ and so, erasing $x$ left and $y$ right $$y(xy)^{n-2}x = x^{n-1}y^{n-1}$$ Now just observe that $y(xy)^{n-2}x = (yx)^{n-1}$.
Determine which of the following functions are uniformly continuous on the open unit interval (0,1)
Well, $x \mapsto \sin x$ can be extended continuously to $[0,1]$, and therefore it is uniformly continuous on $(0,1)$. The same holds for $x \mapsto x^3$. The function in (a) has a vertical asymptote at $x=1$, and it is very easy to check that it cannot be uniformly continuous. Formally, choose $x_n=1-\frac{1}{n}$ and $y_n=1-\frac{1}{n+1}$. Then $x_n \to 1$, $y_n \to 1$, but $$\frac{1}{1-x_n}-\frac{1}{1-y_n}=n-(n+1)=-1.$$ Try to understand what happens in case (b). More generally, it is a nice exercise to prove the following: let $f \colon [a,b) \to \mathbb{R}$ be a continuous function. If $\lim_{x \to b-} f(x) = \pm \infty$, then $f$ can't be uniformly continuous. Hint: otherwise, there would exist a continuous extension $\tilde{f} \colon [a,b] \to \mathbb{R}$ of $f$, namely $\tilde{f}(x)=f(x)$ for every $x \in [a,b)$.
Stopping time for irreducible recurrent finite Markov chains in continuous time
I write $\Bbb E^x$ and $\Bbb P^x$ to indicate the initial condition $X_0=x$. Define $u(x):=\Bbb E^x\left[\int_0^\infty e^{-t} 1_{\{X_t=y\}}dt\right]$. Then $\delta:=\inf\{u(x):x\in E\}>0$. Choose $b>0$ so large that $e^{-b}<\delta/2$. Writing $T_y:=\inf\{t>0:X_t=y\}$, you have $$ \eqalign{ u(x) &\le\Bbb E^x\left[\int_0^b e^{-t}1_{\{X_t=y\}}dt\right]+e^{-b}\cr &=\Bbb E^x\left[\int_0^b e^{-t}1_{\{X_t=y\}}1_{\{T_y\le b\}}dt\right]+e^{-b}\cr &\le\Bbb E^x\left[\int_0^b e^{-t}1_{\{T_y\le b\}}dt\right]+e^{-b}\cr &=(1-e^{-b})\Bbb P^x\left[T_y\le b\right]+e^{-b}\cr &\le\Bbb P^x\left[T_y\le b\right]+e^{-b}\cr } $$ It follows that $$ \Bbb P^x[T_y>b]\le 1-\delta/2,\qquad\forall x. $$ Noting that $\{T_y>2b\} =\{T_y>b\}\cap\{\inf\{t>b:X_t=y\}>2b\}$, and using the Markov property at time $b$: $$ \eqalign{ \Bbb P^x[T_y>2b] &=\Bbb E^x\left[ 1_{\{T_y>b\}}\Bbb P^{X_b}[T_y>b]\right]\cr &\le\Bbb E^x\left[ 1_{\{T_y>b\}}\cdot(1-\delta/2)\right]\cr &\le(1-\delta/2)^2. } $$ Repeating this you see that $$ \Bbb P^x[T_y>nb]\le(1-\delta/2)^n,\qquad n=1,2,\ldots, x\in E. $$ It follows easily from this that there is a constant $C>0$ such that $$ \Bbb P^x[T_y>t]\le Ce^{-\rho t},\qquad t>0, x\in E, $$ where $\rho:=-[\log(1-\delta/2)]/b$. This implies that $T_y$ has finite moments of all orders; indeed $$ \Bbb E^x[\exp(\alpha T_y)]<\infty,\qquad x\in E, $$ provided $\alpha<\rho$.
Show algebraically that every pair of distinct points of $\Bbb H$ lies on a unique $\Bbb H$-line.
Suppose that $(x_1,y_1)$ and $(x_2,y_2)$ are two points. If $x_1=x_2$ then there is a unique line of $L_1$ through these points, otherwise there are no lines of type $L_1$. If a line $(x-a)^2+y^2=c^2$ passes through our points, then for $a$ and $c$ we get the equations $$(x_1-a)^2+y_1^2=c^2,$$ $$(x_2-a)^2+y_2^2=c^2.$$ It means that $$(x_1-a)^2+y_1^2=(x_2-a)^2+y_2^2,$$ $$x_1^2-2ax_1+a^2+y_1^2=x_2^2-2ax_2+a^2+y_2^2,$$ $$2a(x_2-x_1)=x_2^2-x_1^2+y_2^2-y_1^2.$$ If $x_1=x_2$, then we get $y_2=y_1$, which contradicts that our points are different. Thus in this case we get no lines of type $L_2$ and the line of type $L_1$ through these points is unique. Otherwise this equation has a unique solution for $a$: $$a=\frac{x_2+x_1}{2}+\frac{y_2^2-y_1^2}{2(x_2-x_1)}.$$ And substituting this $a$ we get the unique $c^2$ - thus, by $c>0$, the unique $c$ - from any of our equations. This means that there is a unique line from $L_2$ (and as we have seen, no lines from $L_1$) through our points in this case.
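A quick numerical check of the formula for $a$ and $c$ with two illustrative points in the upper half-plane:

```python
x1, y1 = 0.0, 1.0
x2, y2 = 3.0, 2.0     # two illustrative points with x1 != x2

a = (x2 + x1) / 2 + (y2**2 - y1**2) / (2 * (x2 - x1))
c2 = (x1 - a)**2 + y1**2

print(a, c2)
print((x2 - a)**2 + y2**2)   # agrees with c2, so both points lie on (x - a)^2 + y^2 = c^2
```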
Equality between volume and surface
The isoperimetric inequality: for $V\subset\Bbb R^n$, the measures of the body and of its boundary hypersurface satisfy $$m_{n-1}(\partial V)\ge n\,m_n(V)^{(n-1)/n}m_n(B_1)^{1/n}$$ with $B_1$ the unit ball. Interesting integrals-related fact: the $n$-dimensional isoperimetric inequality is equivalent (for sufficiently smooth domains) to the Sobolev inequality on $\Bbb R^n$ with optimal constant. See Isoperimetric inequality. EDIT: the coarea formula gives a relation between the measure of a body composed of level sets and the measures of the level sets: $$ \operatorname{Vol}(M) = \int_M d\operatorname{Vol}_M = \int_{-\infty}^\infty\left(\int_{f^{-1}(t)}\frac{1}{|\nabla f|}\,dA\right)dt $$ (adapted from the link; the notation translates easily).
On group quotient by a characteristic subgroup
$\newcommand{\Span}[1]{\left\langle #1 \right\rangle}$Let $p$ be an odd prime, and $G$ be the group of order $p^{3}$ and exponent $p^{2}$, which is then given by the presentation $$ \Span{ a, b : a^{p^{2}} = b^p = 1, b^{-1} a b = a^{1+p}}. $$ Then $H = \Span{a^{p}} = Z(G)$ is a characteristic subgroup of order $p$ of $G$, $K = \Span{a^{p}, b}$ is a characteristic subgroup of $G$, as it consists of the elements of $G$ of order a divisor of $p$, and has order $p^{2}$, $H \le K$. But $G/H$ is elementary abelian of order $p^{2}$, and thus characteristically simple, so that $K/H$ (which has order $p$) is not characteristic in $G/H$. The point here is that not all automorphisms of $G/H$ lift to automorphisms of $G$.
Approximation of $\sin x$ in $L^2$-space
You are correct. So you are in fact trying to find the orthogonal projection of the function $f(x) = \sin x$ onto the subspace $M = \operatorname{span} \{x\}$. On $[0,\pi/2]$ we have $\|x\|^2 = \int_0^{\pi/2} x^2\,dx = \frac{\pi^3}{24}$, so the orthonormal basis for $M$ is $\left\{\frac{x}{\|x\|}\right\} = \left\{\sqrt{\tfrac{24}{\pi^3}}\,x\right\}$ and the projection onto $M$ is given by $$P_Mu = \left\langle u, \frac{x}{\|x\|}\right\rangle \frac{x}{\|x\|} = \frac{\langle u, x\rangle}{\|x\|^2}\cdot x = \frac{\langle u, x\rangle}{\frac{\pi^3}{24}}\cdot x$$ In particular, since $\langle \sin x, x\rangle = \int_0^{\pi/2} x\sin x\,dx = 1$, $$P_Mf = \frac{\langle \sin x, x\rangle}{\frac{\pi^3}{24}}\cdot x= \frac{24}{\pi^3} \cdot x$$
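A quick numerical check of the coefficient (assuming the interval $[0,\pi/2]$, which is what makes $\|x\|^2=\pi^3/24$):

```python
import numpy as np
from scipy.integrate import quad

num, _ = quad(lambda x: np.sin(x) * x, 0.0, np.pi / 2)   # <sin x, x>
den, _ = quad(lambda x: x * x, 0.0, np.pi / 2)           # <x, x> = pi^3 / 24

print(num / den)        # best coefficient c in the approximation c*x
print(24 / np.pi**3)    # same number, ~0.7737
```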
Stochastic Processes Question
Here is an example related to your question. Consider some i.i.d. positive integer valued $(Y_x)_{x\geqslant1}$. Let $Z_0=0$ and $Z_x=Y_1+\cdots+Y_x$ for every $x\geqslant1$. For every $n\geqslant0$, let $X_n=\max\{x\geqslant0\mid Z_x\leqslant n+Z_{X_0}\}$. In words, $X_{n+1}$ is almost surely either $X_n$ or $X_n+1$, and the process $(X_n)_{n\geqslant0}$ stays at $x$ during $Y_x$ time steps. The following holds: The number of visits to $x\geqslant X_0$ is $Z_x$. The number of visits to $x\geqslant X_0$ is almost surely finite. The number of visits to $x\geqslant X_0$ is integrable iff $Z_x$ is integrable. The process $(X_n)_{n\geqslant0}$ is a Markov chain iff, for every $x\geqslant X_0$, $Z_x$ has a geometric distribution.
Orthogonal projections and being trace class
Consider $\mathcal{H}= \mathcal{K}\oplus \mathcal{K}$, where $\mathcal{K}$ is an arbitrary Hilbert space, and let $P$ be the orthogonal projection onto the first component and $Q$ the orthogonal projection onto the second component. Let $U: \mathcal{H}\to \mathcal{H}$ be the unitary that switches components. Then $$Q = UPU^* = UPU$$ Note further that $PQ = QP = 0$, so $B= P-Q$ is self-adjoint with $B^2 = P+Q = I$. However, if $\mathcal{K}$ has infinite dimension, then $B$ does not have finite-dimensional range; since $B^2=I$, it is not even compact, and hence not trace-class. Thus, in general, $B$ need not be trace-class.
How to derivative the linear equation of matrix
You can compute the derivative as a directional (Gâteaux) derivative: \begin{eqnarray} D_wF(h)(w,x)&=&\frac{d}{dt}F(w+th,x)\bigg|_{t=0}\\ &=&\sum_{i=1}^{N}\int_{x \in \Omega} \frac{d}{dt}\left[(Y(x)-(w+th)^TA(x))^2\right]u_i(x)\bigg|_{t=0}dx\\ &=&-2\sum_{i=1}^{N}\int_{x \in \Omega} (Y(x)-w^TA(x))h^TA(x)u_i(x)dx. \end{eqnarray}
Integral operator is bounded on $L^p$ if it maps $L^p$ to itself
Let $A_x:L^p(\mu)\rightarrow L^1(\mu)$ be defined by $A_xf(y) = k(x,y)f(y)$ whenever $x$ is chosen such that $A_xf\in L^1$. We show that this map is bounded by applying the Closed Graph Theorem. Suppose that $f_n\rightarrow 0$ in $L^p$ and that $A_xf_n\rightarrow g$ in $L^1$. Now by taking a subsequence if necessary we can assume that $f_n\rightarrow 0$ $\mu$-almost everywhere but then $A_xf_n\rightarrow 0$ $\mu$-almost everywhere so that $g = 0$. By the Closed Graph Theorem we therefore conclude that there is a $C_x>0$ such that \begin{equation*} \int_{X}|k(x,y)f(y)|d\mu(y)\leq C_x\left(\int_{X}|f(y)|^pd\mu(y)\right)^{1/p} \end{equation*} for $\mu$-almost every $x\in X$. If we define a linear functional $l_x$ for every $x$ outside the exceptional set via $$l_x(f) = \int_{X}k(x,y)f(y)d\mu(y)$$ we find that this defines a continuous linear functional by the previous reasoning and therefore $l_x\in (L^p)^\ast$. Since the dual of $L^p$ is $L^q$ where $\frac{1}{q}+\frac{1}{p} = 1$ we can find a function $g_x\in L^q(\mu)$ such that $l_x(f) = \int_{X}g_x(y)f(y)d\mu(y)$ but this implies that \begin{equation*} \int_{X}k(x,y)f(y)d\mu(y) = \int_{X}g_x(y)f(y)d\mu(y) \end{equation*} for every $f\in L^p(\mu)$. This is only possible if $k(x,y) = g_x(y)$ for $\mu$-almost every $y\in X$ and $\mu$-almost every $x\in X$. Thus $y\mapsto k(x,y)$ belongs to $L^q$ for $\mu$-almost every $x$ in $X$.
Is there an analytical solution to this matrix equation?
Use the Spectral Theorem to write $A=PDP^T$ where $D$ is diagonal and $P$ is orthogonal. Notice that $D$ has strictly positive diagonal elements because $A$ is positive definite. A guaranteed minimum occurs if $$a=x^TPDP^Tx$$ Write $y=P^Tx$ and this becomes $a=y^TDy$. If the diagonal elements of $D$ are $\lambda_i$ for $i=1,\dots,d$ and $y=(y_1,\dots,y_d)$ then this is equivalent to $$a=\sum_{i=1}^d\lambda_i y_i^2 \tag{$*$}$$ When $a<0$, we see there can be no solution. When $a=0$, $y=0$ is the only solution. For both these cases, the best we can do is $y=x=0$. When $a>0$, we see there are plenty of solutions -- an ellipsoid of solutions! In other words, when $a>0$, what you can do is: $\quad(1)$: Calculate the eigenvalues $\lambda_i$ of $A$. $\quad(2)$: Calculate an orthonormal basis of eigenvectors for A, thus obtaining $P$. $\quad(3)$: Choose some $y$ that solves $(*)$. $\quad(4)$: Set $x=Py$ $($ remember that $P$ is orthogonal$)$.
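Here is a small numerical sketch of steps (1)-(4) for an illustrative $2\times 2$ positive definite $A$ and some $a>0$; the random direction is just one of the many points on the ellipsoid of solutions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # symmetric positive definite (illustrative)
a = 3.0                              # must be > 0 for a nonzero solution to exist

lam, P = np.linalg.eigh(A)           # steps (1)-(2): A = P @ diag(lam) @ P.T, lam > 0
y = rng.standard_normal(2)           # step (3): pick any direction ...
y *= np.sqrt(a / (lam @ (y * y)))    # ... and rescale so that sum(lam_i * y_i^2) = a
x = P @ y                            # step (4)

print(x, x @ A @ x)                  # the second value is ~3.0
```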
How to generalize Poisson correction for multiple populations?
You’re overthinking this. You have two independent colours and you want to estimate how many marbles there are of each. Just apply the single-colour method to each of them. The estimator for the blue marbles is $\hat n_1=-m\ln(E+R)$, with $E+R$ the number of wells that are empty of blue marbles, and the estimator for the red marbles is $\hat n_2=-m\ln(E+B)$, with $E+B$ the number of wells that are empty of red marbles.
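A quick simulation (assuming, as the estimators suggest, that $E$, $R$ and $B$ denote the fractions of wells that are empty, red-only and blue-only, so that $E+R$ is the fraction of wells containing no blue marble):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n1, n2 = 10_000, 8_000, 12_000          # wells, blue marbles, red marbles (illustrative)

blue = np.bincount(rng.integers(0, m, n1), minlength=m)
red  = np.bincount(rng.integers(0, m, n2), minlength=m)

frac_no_blue = np.mean(blue == 0)          # E + R: fraction of wells with no blue marble
frac_no_red  = np.mean(red == 0)           # E + B: fraction of wells with no red marble

print(-m * np.log(frac_no_blue))           # estimate of n1, ~8000
print(-m * np.log(frac_no_red))            # estimate of n2, ~12000
```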
can polynomial functions with rational coefficients approximate any continuous function on $[a,b]$.
For sure it is true. For two polynomials $P_n,P_n^\prime$ of the same degree $n$ with coefficients $p_0, \dots p_n$ and $p_0^\prime, \dots , p_n^\prime$ respectively, you have $$\Vert P_n -P_n^\prime\Vert_\infty =\sup\limits_{t \in [a,b]} \vert P_n(t)-P_n^\prime(t)\vert \le c \left(\sup\limits_{0\le i\le n}\vert p_i -p_i^\prime\vert\right)$$ with $c=\sup\limits_{t \in [a,b]} \sum_{k=0}^n \vert t \vert^k$. Hence if you have a real polynomial $P_n$ at a distance less than $\epsilon/2$ of $f$, you can find a polynomial of the same degree with rational coefficients $P_n^\prime$ at a distance less than $\epsilon/2$ of $P_n$. And you can conclude with the triangle inequality.
If $g_i\in K[Y_1,\dots,Y_m]$, is $(X_1-g_1,\dots,X_n-g_n)\cap k[Y_1,\dots,Y_m]=0$?
Yes. As is often the case, the easiest way to see this is to construct a homomorphism: there is a homomorphism $F:k[X_1,\dots,X_n,Y_1,\dots,Y_m]\to k[Y_1,\dots,Y_m]$ fixing $k[Y_1,\dots,Y_m]$ and mapping $X_i$ to $g_i$. Clearly $\ker F$ contains $(X_1-g_1,\dots,X_n-g_n)$ but does not contain any nonzero element of $k[Y_1,\dots,Y_m]$, and thus $(X_1-g_1,\dots,X_n-g_n)\cap k[Y_1,\dots,Y_m]=0.$
Prove by Induction that $n! > \left(\frac{n}{e}\right)^n$ for $n>1$
Notice that $$(k+1)!=(k+1)k!>(k+1)\left(\frac ke\right)^k>\frac{k+1}e\left(\frac{k+1}e\right)^k=\left(\frac{k+1}e\right)^{k+1}$$ The last inequality step uses the fact that $$k^k>\frac{(k+1)^k}e$$ or, $$e>\frac{(k+1)^k}{k^k}=\left(1+\frac1k\right)^k$$
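The claim (and the key inequality $e>(1+1/k)^k$ used in the inductive step) can be spot-checked numerically:

```python
from math import factorial, e

for n in range(2, 15):
    assert factorial(n) > (n / e)**n    # the statement being proved
    assert e > (1 + 1/n)**n             # the inequality used in the inductive step
print("checked for n = 2, ..., 14")
```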
How long does it take this object to arrive at the point $M$? Assume that $|KL| =|LM|$.
Use Newton's equations of motion. On the incline $KL$: $$mg\sin\theta=ma \;\Rightarrow\; a=g\sin\theta.$$ The length of the incline is $h/\sin\theta$, so the speed at $L$ satisfies $$v^2=2g\sin\theta\cdot\frac{h}{\sin\theta}=2gh.$$ From $v=g\sin\theta\, t_1$ we get $$t_1=\dfrac{\sqrt{2gh}}{g\sin\theta}.$$ On $LM$ (the same length $h/\sin\theta$, traversed at the constant speed $\sqrt{2gh}$), $t_2=\dfrac{h}{\sqrt{2gh}\,\sin\theta}$, so $$t=t_1+t_2=\dfrac{\sqrt{2gh}}{g\sin\theta}+\dfrac{h}{\sqrt{2gh}\,\sin\theta} =\dfrac{3\sqrt h}{\sqrt{2g}\,\sin\theta}.$$ This solution assumes the (square-like) object continues to slide past $L$, pivoting on its vertex at $L$ with unchanged speed.
A set with non-$\sigma$-algebra monotone class
I don't think your solution is correct. In particular, if $X_i \in \mathcal M$, then $\bigcup_{i = 1}^n X_i, \bigcap_{i = 1}^n X_i \in \mathcal M$ is going to be true (assuming $\mathcal M = \mathcal P(X)$ is the power set). Yes the unions are countable, but all you need to show to get that the set is a $\sigma$-algebra is that countable intersections and unions are in the set, which is obviously true, given the definition of the power set. Consider the following example: Let $X = \{a, b\}$ and let $\mathcal M = \big\{ \emptyset, \{a\}, \{a, b\} \big\}$. Observe that $\emptyset, X \in \mathcal M$. If $A_k \nearrow A$, it is either an eventually constant sequence, in which case we're done, or it would have to be a subsequence of $\emptyset \subseteq \{a\} \subseteq X$, each of whose limits is in $\mathcal M$. If $A_k \searrow A$, it is either an eventually constant sequence, in which case we're done, or it would have to be a subsequence of $X \supseteq \{a\} \supseteq \emptyset$, each of whose limits is in $\mathcal M$. Conclude that $\mathcal M$ is a monotone class that is not an algebra and hence not a $\sigma$-algebra. Notice it's not an algebra because it is not closed under complements: $X \setminus \{a\} = \{b\}$ is not in the set.
Finding 4th Moment of a Random Variable.
It's not a fourth moment; it's actually a second (non-central) moment of a chi-square. Up to a constant, the sum $\sum_i y_i^2$ is a chi-squared random variable with $N$ degrees of freedom. So up to a constant this is the second non-central moment of a chi-square with $N$ degrees of freedom. Look here for the formula for the second non-central moment: https://en.wikipedia.org/wiki/Chi-squared_distribution#Noncentral_moments
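A Monte Carlo check, assuming the $y_i$ are i.i.d. standard normal (so $\sum y_i^2\sim\chi^2_N$ and its second non-central moment is $N(N+2)$):

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 5, 500_000

s = (rng.standard_normal((trials, N)) ** 2).sum(axis=1)   # chi-square with N degrees of freedom
print((s ** 2).mean())                                     # Monte Carlo estimate of E[s^2]
print(N * (N + 2))                                          # exact value: N^2 + 2N = 35
```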
A stronger theorem than Gershgorin theorem
Here is my solution using the canonical form for symmetric matrices. We argue by contradiction. Suppose there is an $i$ such that the $>$ inequality holds for every eigenvalue. Let $Q^TAQ=D$, where $D$ is a diagonal matrix and $Q$ is orthogonal. Then for any $\|x\|=1$ we have $((D-a_{ii}I)x,(D-a_{ii}I)x)>\sum_{j\neq i}a_{ij}^2$, since the left-hand side is a convex combination of the numbers $(\lambda_k-a_{ii})^2$, each of which exceeds the bound. Using the canonical form above, the same holds for $A$: $((A-a_{ii}I)x,(A-a_{ii}I)x) >\sum_{j\neq i}a_{ij}^2$ for every unit vector $x$. Letting $x = e_i$ gives $((A-a_{ii}I)e_i,(A-a_{ii}I)e_i)=\sum_{j\neq i}a_{ij}^2$, which is the desired contradiction.
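Here is a quick numerical check of the statement being proved (for a symmetric $A$ and every index $i$, some eigenvalue $\lambda$ satisfies $(\lambda-a_{ii})^2\le\sum_{j\ne i}a_{ij}^2$):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    A = rng.standard_normal((6, 6))
    A = (A + A.T) / 2                                   # random symmetric matrix
    eig = np.linalg.eigvalsh(A)
    for i in range(A.shape[0]):
        radius_sq = (A[i] ** 2).sum() - A[i, i] ** 2    # sum of squares of the off-diagonal row entries
        assert ((eig - A[i, i]) ** 2 <= radius_sq + 1e-9).any()
print("each diagonal entry has an eigenvalue within its row's (squared) radius")
```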
Solve trivariate linear Diophantine equation: $ 2x +3y +4z = 5 $
Hint: Rosen has $\begin{align} &\color{#c00}{(3,-2,0)}s\\ +\, &\color{#0a0}{(-2,0,1)}t\\ +\, &(-5,5,0)\end{align}$ vs. $\begin{align} &(3,-2,0)s\\ +\, &\color{#90f}{(4,-4,1)}z\\ +\,&(-5,5,0)\end{align}\,$ in yours. But $\ 2\color{#c00}{(3,-2,0)} + \color{#0a0}{(-2,0,1)}\ \ =\ \ \color{#90f}{(4,-4,1)}\,$, so they are equivalent, i.e. the span doesn't change by the elementary row operation of adding $2$ times the first row to the second, since that's an invertible operation. Remark $ $ See here for the general linear theory, and a formula / algorithm for the trivariate case.
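A quick check that both parametrizations produce solutions of $2x+3y+4z=5$ (the functions below just spell out the two vector forms above):

```python
from itertools import product

def rosen(s, t):   # (3,-2,0)s + (-2,0,1)t + (-5,5,0)
    return (3*s - 2*t - 5, -2*s + 5, t)

def yours(s, z):   # (3,-2,0)s + (4,-4,1)z + (-5,5,0)
    return (3*s + 4*z - 5, -2*s - 4*z + 5, z)

for s, t in product(range(-5, 6), repeat=2):
    for x, y, z in (rosen(s, t), yours(s, t)):
        assert 2*x + 3*y + 4*z == 5
print("both families satisfy 2x + 3y + 4z = 5")
```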
Reverse of Second Order Derivatives in Hessian Matrix
After thinking for a while, I think I obtained the necessary answer. It does involve both Jacobian and Hessian matrices. I know the following: $$ x = \phi(\xi,\eta) $$ $$ y = \psi(\xi,\eta) $$ Obtaining the reverse of this is very cumbersome. The first and second derivatives are found as follows: $$ \frac{\partial (\cdot)}{\partial x} = \frac{\partial \xi}{\partial x} \frac{\partial (\cdot)}{\partial \xi} + \frac{\partial \eta}{\partial x} \frac{\partial (\cdot)}{\partial \eta} $$ $$ \frac{\partial (\cdot)}{\partial y} = \frac{\partial \xi}{\partial y} \frac{\partial (\cdot)}{\partial \xi} + \frac{\partial \eta}{\partial y} \frac{\partial (\cdot)}{\partial \eta} $$ $$ \frac{\partial^2 (\cdot)}{\partial x^2} = \frac{\partial^2 \xi}{\partial x^2} \frac{\partial (\cdot)}{\partial \xi} + \frac{\partial^2 \eta}{\partial x^2} \frac{\partial (\cdot)}{\partial \eta} + \left(\frac{\partial \xi}{\partial x}\right)^2 \frac{\partial^2 (\cdot)}{\partial \xi^2} + \left(\frac{\partial \eta}{\partial x}\right)^2 \frac{\partial^2 (\cdot)}{\partial \eta^2} + 2 \left(\frac{\partial \xi}{\partial x}\right)\left(\frac{\partial \eta}{\partial x}\right) \frac{\partial^2 (\cdot)}{\partial \xi \partial \eta} $$ $$ \frac{\partial^2 (\cdot)}{\partial y^2} = \frac{\partial^2 \xi}{\partial y^2} \frac{\partial (\cdot)}{\partial \xi} + \frac{\partial^2 \eta}{\partial y^2} \frac{\partial (\cdot)}{\partial \eta} + \left(\frac{\partial \xi}{\partial y}\right)^2 \frac{\partial^2 (\cdot)}{\partial \xi^2} + \left(\frac{\partial \eta}{\partial y}\right)^2 \frac{\partial^2 (\cdot)}{\partial \eta^2} + 2 \left(\frac{\partial \xi}{\partial y}\right)\left(\frac{\partial \eta}{\partial y}\right) \frac{\partial^2 (\cdot)}{\partial \xi \partial \eta} $$ $$ \frac{\partial^2 (\cdot)}{\partial x \partial y} = \frac{\partial^2 \xi}{\partial x \partial y} \frac{\partial (\cdot)}{\partial \xi} + \frac{\partial^2 \eta}{\partial x \partial y} \frac{\partial (\cdot)}{\partial \eta} + \left(\frac{\partial \xi}{\partial x}\frac{\partial \xi}{\partial y}\right) \frac{\partial^2 (\cdot)}{\partial \xi^2} + \left(\frac{\partial \eta}{\partial x}\frac{\partial \eta}{\partial y}\right) \frac{\partial^2 (\cdot)}{\partial \eta^2} + \left(\frac{\partial \xi}{\partial x} \frac{\partial \eta}{\partial y} + \frac{\partial \xi}{\partial y} \frac{\partial \eta}{\partial x} \right) \frac{\partial^2 (\cdot)}{\partial \xi \partial \eta} $$ To save space, I will denote derivatives as subscripts. If we construct matrix to relate derivatives in $x,y$ to $\xi,\eta$: $$ \left[ \matrix{\partial_x \\ \partial_y \\ \partial_{xx} \\ \partial_{yy} \\ \partial_{xy} }\right] = \left[ \matrix{ \xi_x && \eta_x && 0 && 0 && 0 \\ \xi_y && \eta_y && 0 && 0 && 0 \\ \xi_{xx} && \eta_{xx} && \xi_x^2 && \eta_x^2 && 2 \xi_x \eta_x \\ \xi_{yy} && \eta_{yy} && \xi_y^2 && \eta_y^2 && 2 \xi_y \eta_y \\ \xi_{xy} && \eta_{xy} && \xi_x \xi_y && \eta_x \eta_y && \xi_x \eta_y + \xi_y \eta_x}\right] \left[ \matrix{\partial_\xi \\ \partial_\eta \\ \partial_{\xi\xi} \\ \partial_{\eta\eta} \\ \partial_{\xi\eta} }\right] $$ Here, by making analogy with Jacobian matrix to relate first-order derivatives to each other, the derived matrix relates both first-order and second-order derivatives to each other. Since I explicitly know the followings: $$ x_\xi, x_\eta, x_{\xi \xi}, x_{\eta \eta}, x_{\xi \eta}, y_\xi, y_\eta, y_{\xi \xi}, y_{\eta \eta}, y_{\xi \eta} $$ It means that I know the inverse relation. 
More precisely, from $x_\xi, x_\eta, x_{\xi\xi}, \dots$ I can build the analogous matrix in the opposite direction (expressing $\partial_\xi, \partial_\eta, \partial_{\xi\xi}, \partial_{\eta\eta}, \partial_{\xi\eta}$ in terms of $\partial_x, \partial_y, \partial_{xx}, \partial_{yy}, \partial_{xy}$), and inverting that matrix yields the matrix above, i.e. the values I am seeking.
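As a sketch of this procedure, the SymPy snippet below builds the "opposite direction" matrix for an illustrative map (polar-like coordinates, purely an assumption for demonstration) and inverts it; the entries of the inverse are then the entries of the matrix above, expressed as functions of $\xi$ and $\eta$.

```python
import sympy as sp

xi, eta = sp.symbols('xi eta')
x = xi * sp.cos(eta)      # illustrative map x = phi(xi, eta)
y = xi * sp.sin(eta)      # illustrative map y = psi(xi, eta)

d = sp.diff

# Rows express (d_xi, d_eta, d_xixi, d_etaeta, d_xieta) of any function
# in terms of (d_x, d_y, d_xx, d_yy, d_xy), using only derivatives of x, y w.r.t. xi, eta.
M = sp.Matrix([
    [d(x, xi),       d(y, xi),       0,                  0,                  0],
    [d(x, eta),      d(y, eta),      0,                  0,                  0],
    [d(x, xi, xi),   d(y, xi, xi),   d(x, xi)**2,        d(y, xi)**2,        2*d(x, xi)*d(y, xi)],
    [d(x, eta, eta), d(y, eta, eta), d(x, eta)**2,       d(y, eta)**2,       2*d(x, eta)*d(y, eta)],
    [d(x, xi, eta),  d(y, xi, eta),  d(x, xi)*d(x, eta), d(y, xi)*d(y, eta),
     d(x, xi)*d(y, eta) + d(x, eta)*d(y, xi)],
])

M_inv = sp.simplify(M.inv())   # entries: xi_x, eta_x, ..., xi_x**2, ..., as functions of (xi, eta)
sp.pprint(M_inv)
```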