Degree of splitting field of $X^4+2X^2+2$ over $\mathbf{Q}$
You're off to a good start. The splitting field $\Omega$ contains $\sqrt[4]{2}e^{\tfrac a8\pi i}$ for $a\in\{\pm3,\pm5\}$ and hence also $$\frac{\sqrt[4]{2}e^{\tfrac 58\pi i}}{\sqrt[4]{2}e^{\tfrac 38\pi i}}=e^{\tfrac14\pi i}=\zeta_8,$$ which shows that $\Bbb{Q}(\zeta_8)\subset\Omega$. Over this subfield we already have $$X^4+2X^2+2=(X^2-(-1+i))(X^2-(-1-i)),$$ where $-1-i=\zeta_8^2(-1+i)$. So if $\alpha\in\Omega$ is a root of $X^2-(-1+i)$ then $$(\zeta_8\alpha)^2-(-1-i)=\zeta_8^2(\alpha^2-(-1+i))=0,$$ i.e. $\zeta_8\alpha$ is a root of $X^2-(-1-i)$. This shows that $\Omega$ is the splitting field of $X^2-(-1+i)$ over $\Bbb{Q}(\zeta_8)$. It now suffices to check that $-1+i$ is not a square in $\Bbb{Q}(\zeta_8)$ to conclude that $[\Omega:\Bbb{Q}]=8$. Edit: In response to the comment below; an excellent hands-on proof of the fact that $-1+i$ is not a square in $\Bbb{Q}(\zeta_8)$ has already been given in another answer. So here's a less constructive proof: Suppose $-1+i$ is a square in $\Bbb{Q}(\zeta_8)$. Then it is a square in its ring of integers $\Bbb{Z}[\zeta_8]$, which is a unique factorization domain, and the factorization $$-1+i =-(1-\zeta_8)(1+\zeta_8) =\zeta_8^2\tfrac{1+\zeta_8}{1-\zeta_8}(1-\zeta_8)^2,$$ shows that the unit $\tfrac{1+\zeta_8}{1-\zeta_8}\in\Bbb{Z}[\zeta_8]$ is then also a square, where $\tfrac{1+\zeta_8}{1-\zeta_8}=\zeta_8(1+\zeta_8+\zeta_8^2)$. Because $\Bbb{Z}[\zeta_8]$ is a UFD, every unit in $\Bbb{Z}[\zeta_8]$ is a cyclotomic unit, i.e. of the form $$\zeta_8^a\big(\tfrac{1-\zeta_8^3}{1-\zeta_8}\big)^b =\zeta_8^a(1+\zeta_8+\zeta_8^2)^b,$$ for unique $a\in\Bbb{Z}/8\Bbb{Z}$ and $b\in\Bbb{Z}$. This shows that $\zeta_8(1+\zeta_8+\zeta_8^2)$ is not a square.
Boxplots and Quartiles
Data points may well be beyond the whiskers of the plot. The box itself is drawn from the value of the lower quartile to the upper quartile. The median is marked within the box. The extent of the whiskers however is normally given by the least and greatest values of the data within 1.5 times the inter-quartile range from the lower and upper quartile respectively. The whiskers do not normally extend to the least and greatest values of the data, which is probably what you are expecting.
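For concreteness, here is a small sketch (assuming NumPy, and the common $1.5\times$ IQR convention) of how the whisker limits are typically computed; the function name and sample data are made up for illustration.

```python
import numpy as np

def boxplot_whiskers(data):
    """Return (lower_whisker, upper_whisker) using the 1.5 * IQR rule."""
    data = np.asarray(data, dtype=float)
    q1, q3 = np.percentile(data, [25, 75])   # lower and upper quartiles
    iqr = q3 - q1                            # inter-quartile range
    lo_fence = q1 - 1.5 * iqr                # the fences bound the whiskers' reach
    hi_fence = q3 + 1.5 * iqr
    # Each whisker ends at the most extreme data point still inside its fence.
    return data[data >= lo_fence].min(), data[data <= hi_fence].max()

sample = [1, 2, 2, 3, 3, 3, 4, 5, 14]        # 14 lies beyond the upper whisker
print(boxplot_whiskers(sample))              # (1.0, 5.0); 14 would be drawn as a separate point
```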
solve $x=e^{a\frac{\ln(b/x)}{\ln(b/x)+c}}$ for $x$
Hint. Since the RHS is an exponential, necessarily the LHS $x$ should be positive. Moreover the term at the denominator, $\ln(b/x)+c$, should be different from zero. Then, taking the natural logarithm of both sides we get $$\ln(x)=a\frac{\ln(b/x)}{\ln(b/x)+c}.$$ Now recall that $\ln(b/x)=\ln(b)-\ln(x)$ (here $b>0$). Can you take it from here?
Generalisation Rule using Hilbert's Calculus
Hint: $\varphi \to \exists x_1 \varphi$ is an axiom; re-iterate it $n$ times. Regarding the universal quantifier, apply the previous case to $\lnot \varphi \to \exists x_1 \lnot \varphi$ and contrapose it, to get: $\lnot \exists x_1 \lnot \varphi \to \varphi$. Then apply the definition of $\forall$ as $\lnot \exists \lnot$.
Equality between support for a function and closed union of elements of a partition of unity (Proof from John Lee's Smooth Manifolds)?
I agree that this is a mistake in the text (that doesn't affect the truth of the lemma, as you noted). For instance, if you start with the constant function $f=0$ and apply the construction, you get $\operatorname{supp}\widetilde{f}=\emptyset$, which can't be equal to the right side unless $A =\emptyset$ as well.
A question about a form that is related to modular form
I suppose that $B=2$ .... The space of holomorphic functions on the upper half-plane which are holomorphic at $\infty$ and satisfy $f(z+1)=f(z)$ is isomorphic to the space of holomorphic functions on the open unit disc $D$. This is so because there's a biholomorphic isomorphism ${\cal H}/{\Bbb Z}\simeq D\setminus\{0\}$ and the condition for $f$ to be holomorphic at $\infty$ is exactly the one assuring that the corresponding function extends holomorphically to $0$. If, instead, the relation $f(z+1)=f(z)$ is dropped, note that $z\mapsto-\frac1z$ is an involution of $\cal H$ with $z=i$ as unique fixed point. The shape of the functional equation tells us immediately that $k$ must be even (for $f$ to be non-trivial) and the holomorphic functions are in one-to-one correspondence with the holomorphic functions on the quotient space $\cal H/\sim$ with the extra condition that they must vanish at $\bar\imath$ if $k\equiv2\bmod4$.
Combinatorial and probability question
As noted in the comments, the second part of my original answer was wrong, because putting a ball into a box changes the probabilities of the remaining boxes having or not having a ball inside. The probability that a given box (say $a$) is empty is $6/10$ (only $6$ out of the $10$ valid configurations leave box $a$ empty). Of those $6$ configurations, only $3$ also leave a second box (say $b$) empty, hence the final probability of $2$ boxes being empty is ${6\over10}\times{3\over6}={3\over10}$.
Differential form volume
What you have observed is that Green's theorem in the plane can be used to write down a formula for the integral over any region bounded by a simple closed curve. The general case requires generalizing the concepts of vector calculus that lead to that formula, because essentially what you want to be able to say is something like: $$ \int_{\text{Region}} 1\, dV = \int_{\text{boundary of that Region}} \text{Something } \, dS$$ The question is essentially how you should find that something. The fundamental theorem of calculus has the following generalization (the generalized Stokes' theorem) for differential forms: $$\int_M d \omega = \int_{\partial M}\omega$$ where $\omega$ is now a differential form and $M$ a manifold. So the answer to your question boils down to figuring out how to find a differential form $\omega$ so that $d\omega$ is the volume form. Finding these on manifolds can be difficult, if not impossible, as there is no guarantee the volume form actually is exact (in fact on compact manifolds without boundary it is never exact, by Stokes' theorem), but on $\mathbb{R}^n$ this is not so hard, as the formula for $d$ is very simple, and the volume form has a very easy-to-write-down expression, $dx^1 \wedge \cdots \wedge dx^n$. In particular, if $\omega$ is a form, it can be represented on Euclidean space as $\sum_I u_I\, dx^I$, where $I$ is a multi-index. The exterior derivative is then just $$d\Big(\sum_I u_I \,dx^I \Big) = \sum_I \sum_{k=1}^n\frac{\partial u_I}{\partial x^k}\, dx^k \wedge dx^I.$$ Then we can see by direct calculation that the form $$\omega = \frac1n\sum_{k=1}^n (-1)^{k+1} x^k\, dx^1 \wedge \cdots \wedge \widehat{dx^k} \wedge \cdots \wedge dx^n$$ (where the hat means that factor is omitted) has the desired property: taking $d$ term by term inserts the missing $dx^k$ (the terms with a repeated $dx^j$ vanish), so each of the $n$ summands contributes $dx^1\wedge\cdots\wedge dx^n$, and the factor $\frac1n$ makes $d\omega$ exactly the volume form.
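To make this concrete in the case $n=2$ (the Green's theorem situation you observed), one convenient choice is $$\omega = \tfrac12\,(x\,dy - y\,dx), \qquad d\omega = \tfrac12\,(dx\wedge dy - dy\wedge dx) = dx\wedge dy,$$ so Stokes' theorem gives, for a region $R$ bounded by a simple closed curve $\partial R$, $$\operatorname{Area}(R) = \int_R dx\wedge dy = \frac12\oint_{\partial R} (x\,dy - y\,dx),$$ which is exactly the Green's theorem area formula (and, for polygons, the shoelace formula).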
Differential Topology
First think about why it's true in the $n=1$ case. A linear function is $f(x)=ax$. The derivative at any point is $f'(x)=a$. You can think of $a$ as a $1\times 1$ matrix satisfying $\lim_{h\to 0}\frac{f(x+h)-f(x)-a\cdot h}{h}=0$. The case for higher dimensions is analogous. Suppose $f(x)=Ax:\mathbb{R}^n\to\mathbb{R}^n$ is linear and represented by the matrix $A$. Fix $x_0\in\mathbb{R}^n$. Then to find the differential at $x_0$, we need to find a matrix $D$ satisfying $$\lim_{h \rightarrow 0} \frac{\|A(x_0+h)-Ax - Dh\|}{\|h\|} = 0$$ where $h$ is also a vector in $\mathbb{R}^n$. But choosing $D=A$, this simplifies to $$\lim_{h \rightarrow 0} \frac{\|A(x_0+h)-Ax_0 - Ah\|}{\|h\|} $$ $$\lim_{h \rightarrow 0} \frac{\|Ax_0+Ah-Ax_0 - Ah\|}{\|h\|}$$ $$\lim_{h \rightarrow 0} \frac{\|0\|}{\|h\|}$$ $$=0$$ So $A$ is the differential at $x_0$. Since $x_0$ was arbitrary, $A$ must be the differential at every point. Another way to see this is to realize that the differential is just the matrix of partial derivatives. For simplicity I'll use the $2\times 2$ case. If the matrix of $f$ is $$A=\begin{bmatrix}a&b\\c&d\end{bmatrix}$$ Then $f$ can be written $$f\binom{x}{y} = \binom{f_1(x,y)}{f_2(x,y)} = \binom{ax+by}{cx+dy}$$ Computing the matrix of partial derivatives at any point gives $$D=\begin{bmatrix}\frac{\partial f_1}{\partial x}&\frac{\partial f_1}{\partial y}\\\frac{\partial f_2}{\partial x}&\frac{\partial f_2}{\partial y}\end{bmatrix} = \begin{bmatrix}a&b\\c&d\end{bmatrix} = A$$ Which again shows that $A$ is the differential at every point. If $g:U\to\mathbb{R}^n$ is the inclusion, then locally the map is just $$g\begin{pmatrix}x_1\\\vdots\\x_n\end{pmatrix}=\begin{pmatrix}g_1(x_1,...,x_n)\\\vdots\\g_n(x_1,...,x_n)\end{pmatrix}=\begin{pmatrix}x_1\\\vdots\\x_n\end{pmatrix}$$ This is a linear function represented by the matrix $I_n$, so by the argument above the differential is also $I_n$. Alternatively, computing the partials at any point gives $$D=\begin{pmatrix} \frac{\partial g_1}{\partial x_1} & \cdots & \frac{\partial g_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial g_n}{\partial x_1} & \cdots & \frac{\partial g_n}{\partial x_n} \\ \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix} = I_n$$ Part of the confusion arises from the fact that in the $1\times 1$ case $f:\mathbb{R}\to\mathbb{R}$, the derivative (or differential) is two things: at each point it's the linear transformation $d$ that satisfies $\lim_{h\to 0}\frac{f(x+h)-f(x)-d\cdot h}{h}=0$; in this case $d$ is just a $1\times 1$ matrix AKA a real number, and thus $f'(x)$ is also a function $\mathbb{R}\to\mathbb{R}$. The analogous statement in $\mathbb{R}^n$ is that the differential is an $n\times n$ matrix at each point and thus a function $f'(x):\mathbb{R}^n\to\mathbb{R}^{n^2}$. For linear maps $f$, $f'$ is a constant function whose constant value is the matrix representing $f$.
Uniform convergence and Integrals
(i) is incorrect: you did not understand how to prove uniform convergence. $$ M_n = \sup_x \frac{|x|}{1 + nx^2} $$ You want to prove that this goes to $0$ when $n\to\infty$. We have $$ \frac{|x|}{1 + nx^2} \le \frac{|x|}{ nx^2} = \frac 1 {n|x|}, $$ but this is not small when $x$ is close to $0$. So let $r>0$ and treat the two regions separately. For $x$ small, the numerator is small: $$ \frac{|x|}{1 + nx^2} \le |x| \le r \quad\text{when } |x| \le r. $$ Otherwise, $$ \frac{|x|}{1 + nx^2} \le \frac 1 {n|x|} \le \frac 1 {nr} < r $$ as soon as $n> \frac 1 {r^2}$. Hence $$ n> \frac 1 {r^2} \Rightarrow M_n \le r, $$ that is, $$M_n\to_{n\to\infty} 0. $$ (ii) is correct. (iii) is not: there is a mistake in your computation (where is $n$?). See the edit by OP in the comments for the solution.
Let $f:[0,1]\to\mathbb R$ be continuous such that $f(t)\geq 0$ for all $t$ in $[0,1]$. What can be said about $g(x):=\int_0^x f(t)\,dt$?
If $x\in[0,1]$, you're ok with $(a)$, yes. Since $f$ is continuous over $[0,1]$ it is bounded there, whence the integral cannot be unbounded over $0\leq x \leq 1$. Note that $g'=f$, and $f\geq 0$. What does this tell you about how $g$ grows?
Principal Component Analysis on a Dataset
The columns are not the only possible directions that PCA will "search" over. There are also linear combinations of columns that might explain a large portion of the variance compared to any single column.
Show a multi-variable function is not continuous at $(0,0)$
Hint: What happens with $f(t,t^2)$ as $t\to 0$?
Difficulties with partial integration
I've never properly learned the notations with du and dx and dy/dx and such. Does du mean u'? If you have a function $u(x)$ of the single variable $x$, the differential $du$ can be seen as the product of the derivative of $u(x)$ with the differential $dx$ of the independent variable, i.e. $du=u'(x)\ dx$. For a detailed explanation of the notation see this answer. -- The integration by parts corresponds to the following rule: $$ \begin{equation*} \int u(x)v^{\prime }(x)\ dx=u(x)v(x)-\int u^{\prime }(x)v(x)\ dx. \end{equation*} $$ We can select the functions $u(x),v(x)$ by using the LIATE rule as in my answer to your second last question or the techniques explained in the answers to your last question LIATE / ILATE rule. We get$^1$: $$\int e^{2x}\sin x\,dx=\frac{1}{2}e^{2x}\sin x-\int \frac{1}{2}e^{2x}\cos x\,dx$$ and $$\int \frac{1}{2}e^{2x}\cos x\,dx=\frac{1}{4}e^{2x}\cos x+\frac{1}{4}\int e^{2x}\sin x\,dx.$$ Consequently, $$\begin{eqnarray*} I &=&\int e^{2x}\sin x\,dx=\frac{1}{2}e^{2x}\sin x-\int \frac{1}{2} e^{2x}\cos x\,dx \\ &=&\frac{1}{2}e^{2x}\sin x-\frac{1}{4}e^{2x}\cos x-\frac{1}{4}\int e^{2x}\sin x\,dx \\ &=&\frac{1}{2}e^{2x}\sin x-\frac{1}{4}e^{2x}\cos x-\frac{1}{4}I. \end{eqnarray*} $$ Solving for $I$ we thus get $$ \begin{eqnarray*} \left( 1+\frac{1}{4}\right) I &=&\frac{1}{2}e^{2x}\sin x-\frac{1}{4} e^{2x}\cos x \\ I &=&\frac{4}{5}\left( \frac{1}{2}\sin x-\frac{1}{4}\cos x\right) e^{2x}. \end{eqnarray*} $$ -- $^1$ The first integral can be evaluated as follows. If $u(x)=\sin x$ and $v^{\prime }(x)=e^{2x}$, then $u^{\prime }(x)=\cos x$ and $v(x)=\frac{1}{2}e^{2x}$. The integration by parts yields $$\int \underset{u(x)}{\underbrace{\sin x}}\,\cdot\underset{v^{\prime }(x)}{\underbrace{e^{2x}}}\ dx=\underset{u(x)}{\underbrace{\sin x}}\,\cdot\underset{v(x)}{\underbrace{\frac{1}{2}e^{2x}}}-\int \underset{u^{\prime }(x)}{\underbrace{\cos x}}\cdot\underset{v(x)}{\,\underbrace{\frac{1}{2}e^{2x}}}\ dx.$$ Remark. As can be seen in AWertheim's answer the opposite selection $u(x)=e^{2x}$ and $v^{\prime }(x)=\sin x$ works too.
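If you want to double-check the result, here is a quick sketch using SymPy (assuming it is available): differentiating the proposed antiderivative gives back the integrand.

```python
import sympy as sp

x = sp.symbols('x')
I = sp.Rational(4, 5) * (sp.sin(x) / 2 - sp.cos(x) / 4) * sp.exp(2 * x)

# The derivative of the proposed antiderivative should equal e^(2x) * sin(x).
print(sp.simplify(sp.diff(I, x) - sp.exp(2 * x) * sp.sin(x)))  # prints 0

# SymPy can also compute the integral directly for comparison.
print(sp.integrate(sp.exp(2 * x) * sp.sin(x), x))
```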
Did I solve this line integral correctly?
I was bored and checked your algebra, which looks right. Since I'm very prone to algebra mistakes myself, I checked your result numerically, and it indeed seems right: http://pastebin.com/wSinrBg3 (compare lines 9 and 14). [This is using the CAS "SymPy".] PS: In all honesty I don't think this is a terribly good question, since what you ask for is quite boring (in the sense that checking someone else's tedious algebra is). I guess you got lucky ^^.
Prove that for $x\in\Bbb{R}$, $|x|\lt 3\implies |x^2-2x-15|\lt 8|x+3|$.
We know that $|x|<3$. Therefore $-3<x<3$, subtracting $5$ from each side gives: $-8<x-5<-2$ And so: $|x-5|<8$ This implies that: $|x^2-2x-15|=|(x-5)(x+3)|=|x-5|\cdot |x+3|<8|x+3|$ As needed.
$\epsilon - \delta$-proof of $\lim_{x\to+\infty}\frac{x}{a^x}=0$
Option: $a^x=e^{x\log a}$, where $\log a>0$. For $x>0$ we have $e^{x\log a}> 1+x\log a + \frac{x^2(\log a)^2}{2!}$, hence $$\dfrac{x}{a^x} =\dfrac{x}{e^{x \log a}}< \dfrac{2x}{x^2(\log a)^2}= \dfrac{2}{x(\log a)^2}= b\cdot\frac1x, \quad\text{where } b=\frac{2}{(\log a)^2}.$$ Let $\epsilon >0$ be given, and let $M>0$ be real with $M >b/\epsilon.$ Then $x >M$ implies $$\dfrac{x}{a^x} < b\cdot\frac 1x < b\cdot\frac 1M<\epsilon.$$
Let $\{a_n\}$ be a sequence of positive real numbers such that $\sum_{n=1}^\infty a_n$ is divergent. Which of the following series are convergent?
For (b) the series could diverge as you showed or converge as with $$a_n = \begin{cases} 1, & n = m^2 \\ \frac{1}{n^2}, & \text{otherwise}\end{cases}$$ since $$\sum_{n= 1}^N \frac{a_n}{1+na_n} = \sum_{n \neq m^2} \frac{a_n}{1+na_n} + \sum_{n = m^2} \frac{a_n}{1+na_n} \\ \leqslant \sum_{n= 1}^N \frac{1}{n + n^2}+ \sum_{n= 1}^N \frac{1}{1+n^2}$$ For (a) the series always diverges. Consider cases where $a_n$ is bounded and unbounded. If $a_n < B$ then $a_n/(1 + a_n) > a_n/(1+B)$ and we have divergence by the comparison test. Try to examine the second case where $a_n$ is unbounded yourself. Hint: There is a subsequence $a_{n_k} \to \infty$
Latin Square Problem: Indempotent Commutative Quasigroup of Order 7
These tables are called Cayley tables. For binary operations on finite sets (usually groups, but it applies to quasigroups and other weaker structures), we can essentially develop a full multiplication table, in exactly the way you'd expect. For example, take the group $\mathbb{Z}_2 \times \mathbb{Z}_2$, under addition. It has exactly $4$ elements: $(0, 0)$, $(1, 0)$, $(0, 1)$, and $(1, 1)$. We can form the Cayley table like so: \begin{matrix} & \color{red}{(0, 0)} & \color{red}{(1, 0)} & \color{red}{(0, 1)} & \color{red}{(1, 1)} \\ \color{red}{(0, 0)} & (0, 0) & (1, 0) & (0, 1) & (1, 1) \\ \color{red}{(1, 0)} & (1, 0) & (0, 0) & (1, 1) & (0, 1) \\ \color{red}{(0, 1)} & (0, 1) & (1, 1) & (0, 0) & (1, 0) \\ \color{red}{(1, 1)} & (1, 1) & \color{green}{(0, 1)} & (1, 0) & (0, 0) \\ \end{matrix} Each entry in the table represents the sum of the red column head and the red row head. For example, the highlighted green entry represents the fact that $$(1, 1) + (1, 0) = (0, 1).$$ Cayley tables have the advantage that certain group properties are easy to spot. For example, the group is commutative, since the table is symmetric about the main diagonal. We can also see that $(0, 0)$ is the group's identity, since the row/column beside/under $\color{red}{(0, 0)}$ is identical to the row/column of table headers. That is, we get the property that $a + (0, 0) = (0, 0) + a = a$ for any $a$ from the four group elements. Moreover, we can see that this operation has inverses (like any group should), because $(0, 0)$ appears in every row and column (and since we get $(0, 0)$ down the main diagonal, we have $a = -a$ for every $a$). Notice that the structure is a Latin square: every element of $\mathbb{Z}_2 \times \mathbb{Z}_2$ occurs exactly once in every row and column (the row/column heads don't count). If you're trying to find a given element $b$ in the row for $a$, you're solving the following equation for $x$: $$a + x = b.$$ In a group, there is always exactly one solution to this equation. Similarly, if you're solving for columns, $$x + a = b.$$ There's no difference if the group is commutative (i.e. when the rows match the columns), but this isn't always the case. For example, if you're looking for $\color{green}{(0, 1)}$ in the $\color{red}{(1, 1)}$ row, the table shows you that you want the entry in the $\color{red}{(1, 0)}$ column. That is, the unique solution to $\color{red}{(1, 1)} + x = \color{green}{(0, 1)}$ is $x = \color{red}{(1, 0)}$. So, you can take finite groups, find their Cayley tables, and get a whole bunch of Latin squares. Can you go the other way? Can you begin with a Latin square and find a Cayley table for a group to match it? The answer is "no", in general. The notion of group is too strong. But, you can always take the set of entries in your Latin square (typically $\lbrace 1, \ldots, n\rbrace$) and use your Latin square to define an operation on the set (call it $\circ$). The Latin square structure, i.e. every entry is found in exactly one row or column, corresponds to unique solutions for equations of the form: $$\begin{align*} a \circ x &= b, \\ x \circ a &= b. \end{align*}$$ When a binary operation yields unique solutions to the above equations, regardless of the values of $a$ and $b$, we get a quasigroup. Conversely, the Cayley table of any quasigroup is a Latin square. Let's do an example. 
Take the following $5 \times 5$ Latin square: $$\begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 2 & 4 & 1 & 5 & 3 \\ 3 & 5 & 4 & 2 & 1 \\ 4 & 1 & 5 & 3 & 2 \\ 5 & 3 & 2 & 1 & 4\end{pmatrix}.$$ We define an operation on $\lbrace 1, 2, 3, 4, 5 \rbrace$ by putting this matrix into a Cayley table: $$\begin{matrix} & \color{red}{1} & \color{red}{2} & \color{red}{3} & \color{red}{4} & \color{red}{5} \\ \color{red}{1} & 1 & 2 & 3 & 4 & 5 \\ \color{red}{2} & 2 & 4 & 1 & 5 & 3 \\ \color{red}{3} & 3 & 5 & 4 & 2 & 1 \\ \color{red}{4} & 4 & 1 & 5 & 3 & 2 \\ \color{red}{5} & 5 & 3 & 2 & 1 & 4\end{matrix}.$$ This defines our operation. For instance, if I wanted to know the value of $3 \circ 2$, I would look at my table in the $\color{red}{3}$ row and $\color{red}{2}$ column, to find the number $5$. So, $3 \circ 2 = 5$. Such a structure is a quasigroup. (Note, however, that it's not commutative, since $2 \circ 3 = 1 \neq 3 \circ 2$.) Your problem is finding an order $7$ idempotent (meaning $x \circ x = x$, for all $x$) commutative quasigroup. In terms of Latin squares, you need to find a Latin square (quasigroup) that is symmetric about the main diagonal (commutative), and contains $1, 2, \ldots, 7$ down the main diagonal (idempotent). Good luck!
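If you want to experiment, here is a small Python sketch that checks the three required properties for any candidate Cayley table; a well-known construction you could try (stated here as a hint, not part of the original answer) is $x \circ y = \frac{n+1}{2}(x+y) \bmod n$ for odd $n$, with elements relabelled $0,\ldots,n-1$.

```python
from itertools import product

def is_idempotent_commutative_quasigroup(table):
    """table[x][y] = x o y on {0, ..., n-1}; check Latin square, symmetry, idempotence."""
    n = len(table)
    rng = list(range(n))
    latin = (all(sorted(row) == rng for row in table) and
             all(sorted(table[x][y] for x in rng) == rng for y in rng))
    commutative = all(table[x][y] == table[y][x] for x, y in product(rng, repeat=2))
    idempotent = all(table[x][x] == x for x in rng)
    return latin and commutative and idempotent

n = 7
# Candidate from the construction above: (n + 1)/2 = 4, so x o y = 4 * (x + y) mod 7.
table = [[(4 * (x + y)) % n for y in range(n)] for x in range(n)]
print(is_idempotent_commutative_quasigroup(table))  # True
```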
The reflective property of ellipses
Figure 1: Start with a line $l$ and two arbitrary points $A$ and $B$ on the same side of the line. Figure 2: Suppose line $l$ is actually a mirror. By Fermat's principle we know that light always follows the shortest path. Therefore, of all the possible paths from $A$ to some arbitrary point $X$ along $l$, to $B$, the one that light will follow is the one that minimizes the distance $AXB$. Figure 3: To find the path minimizing $AXB$, reflect point $A$ about line $l$ (call this reflection $A'$). Note that by construction $\triangle AXA'$ is an isosceles triangle. It follows that the distance $AXB$ equals the distance $A'XB$. Since this is true, and since the shortest distance between two points is a straight line, the distance $AXB$ is minimized when point $X$ takes the position along line $l$ such that $A'XB$ is a straight line. This position is indicated in figure 3 by point $C$. Since $\triangle ACA'$ is by construction isosceles, it is not hard to see (by the properties of isosceles triangles and interior opposite angles) that the angles indicated in figure 3 are equal. That is, the angle of the light ray's incidence equals the angle of its reflection off the mirror. This is what is referred to as the reflective or optical or specular property of light rays. Figure 4: Finally, to see that the ellipse also has the reflective property, suppose you are told that line $l$ is tangent to an ellipse with foci $A$ and $B$, but you are not given the point where $l$ actually touches the ellipse. To find this point of tangency the process is the same as for finding the distance-minimizing path of light. Clearly (by the definition of tangents), for any arbitrary point $X$ along $l$ with $X \neq C$, the inequality $AXB>ACB$ holds. Thus it is evident that the distance $AXB$ will be minimized at the point of tangency $C$. We have just seen that this is exactly the optical or reflective property of light. Therefore the indicated angles are equal, and any ray shot from one focus to the ellipse perimeter will end up bouncing to the other focus.
Reference about conformal map
Invariant Operators on Conformal Manifolds by Jan Slovák is available online: http://www.math.muni.cz/~slovak/ftp/papers/vienna.ps
Hypothesis testing for a small sample
Your approach using the T-test does not seem correct to me, since the underlying assumption is that the data is drawn from a normal distribution with unknown variance. The data consists of paired observations (one before and one after the special classes, for each student), and is discrete (i.e. values seem to be in (a subset of) $\{0,1,2,\ldots,50\}$). Using a non-parametric test seems the best thing to do, since these can be used under minimal assumptions. I think the right test to use is the Wilcoxon signed rank test, although extra assumptions on the data may suggest another test, e.g. if you assume normality, a T-test might be the right test. If you want to know how to conduct the Wilcoxon signed rank test (and the T-test), use Google to look for examples.
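As an illustration (with made-up scores, and assuming SciPy is available), both tests are one-liners:

```python
from scipy.stats import ttest_rel, wilcoxon

# Hypothetical paired scores for 8 students, before and after the special classes.
before = [22, 31, 18, 25, 29, 35, 20, 27]
after  = [28, 33, 17, 30, 34, 38, 26, 31]

stat, p = wilcoxon(before, after)   # Wilcoxon signed rank test on the paired differences
print("Wilcoxon:", stat, p)

t, p_t = ttest_rel(before, after)   # paired t-test, if you are willing to assume normality
print("Paired t-test:", t, p_t)
```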
Prove that $S(m,n)$ is true for all $m \ge 1$ and $n \ge 1$.
Your proof looks valid. It is okay to use two contradictions in your proof, because you are actually proving two things via two separate proofs by contradiction. You are first proving that $S(m,1)$ is true for all $m$ via a proof by contradiction. Then you use this result to prove the desired result using another proof by contradiction.
Which smoothness properties are preserved under ramified covering maps?
Partial answer: Part 1 is true, and follows from the answer to part 3 of this MO post: https://mathoverflow.net/questions/65264/integral-representation-of-higher-order-derivatives. For smoothness of $f$ at a branch point $\pi(p)$, the uniformization theorem allows us to reduce to the case of $\pi : B(0, 1) \to B(0, 1)$ given by $z \mapsto z^p$ and $\Gamma = \mathbb Z/p$ with the action given by multiplication by $p$th roots of unity. A special case of a theorem of G. Schwarz (Smooth functions invariant under the action of a compact Lie group, Topology 14, 1975) implies that the $\mathbb Z/p$-invariant smooth functions $B(0, 1) \to \mathbb C$ are given by smooth functions in the invariant polynomials for this action. In this case, they are $Re(z^p)$ and $Im(z^p)$. Hence $f$ is a smooth function of $z^p$. For part 2, we can reduce in the same way to a Riemannian metric on $B(0, 1)$ and $\pi(z) = z^p$. We may look at the metric as a smooth map $g : B(0, 1) \to SPD_2(\mathbb R)$ that takes values in (wlog) positive definite symmetric matrices. The invariance under $\Gamma$ means that for any $k \in \mathbb Z$ and $z \in B(0, 1)$, $$\zeta_p^{-k} g(\zeta_p^k z) \zeta_p^{k} = g(z) \tag{1}$$ where we identify a complex number $x+iy$ with its differential, $$\begin{pmatrix} x & -y \\ y & x \end{pmatrix}$$ The question is now equivalent to whether equation $(1)$ implies that $g(z)$ is a smooth function of $z^p$. The same result of Schwarz tells us that the trace and determinant of $g(z)$ are smooth functions of $z^p$, but this is not enough to conclude.
Familiar spaces in which every one point set is $G_\delta$ but space is not first countable
Another example (perhaps a bit more familiar to readers of Munkres's text) is $\mathbb R^\omega$ with the box topology.* It is not first countable because you can "diagonalise" through any countable collection of open neighbourhoods of a point. Given $\mathbf x = ( x_n )_n $ and a collection $\{ U_i : i \in \mathbb N \}$ of open neighbourhoods of $\mathbf x$, without loss of generality we may assume that $U_i = \prod_n ( a_n^{(i)} , b_n^{(i)} )$ where $a_n^{(i)} < x_n < b_n^{(i)}$. Taking $c_n = \frac{a_n^{(n)} + x_n }{2}$ and $d_n = \frac{x_n + b_n^{(n)}}{2}$ it follows that $V = \prod_n ( c_n , d_n )$ is an open neighbourhood of $\mathbf x$, but $U_n \not\subseteq V$ for each $n$. It is pretty easy to verify that points are $G_\delta$. (Given $\mathbf x = ( x_n )_n$, set $U_i = \prod_n ( x_n - \frac{1}{i} , x_n + \frac{1}{i} )$ for each $i$, and note that $\bigcap_i U_i = \{ \mathbf x \}$.) *This space is first explicitly mentioned on p.117 of Munkres's text, and has a separate index entry. Its non-metrizability is shown on p.132 and its disconnectedness is shown on p.151, both before the stated exercise. It is also the subject of several exercises prior to Section 30. To someone going through the text, it should be "familiar".
Given $W=ULV^T$ and a vector $\mathbf{x}$, can we compute $UL^kV^T\mathbf{x}$ without doing the SVD, for any integer k?
I think an important observation to make here, especially since the SVD is derived from the spectral theorem, is that the matrices $WW^T$ and $W^TW$ are symmetric positive semi-definite matrices. As mentioned by iljusch in the comments, we will stick to the case that $n = m$. In particular we find that $$ WW^T \;\; =\;\; UL^2U^T \hspace{3pc} W^TW \;\; =\;\; VL^2V^T. $$ Because they are positive semi-definite we can in fact find their square-roots since all of the singular values will be nonnegative: $$ \sqrt{WW^T} \;\; =\;\; ULU^T \hspace{3pc} \sqrt{W^TW} \;\; =\;\; VLV^T. $$ Therefore in computing even powers we find that square-roots can serve as a key to performing your computations without actually performing SVD: \begin{eqnarray*} \sqrt{WW^T}W & = & UL^2V^T \\ \sqrt{WW^T}(WW^TW) & = & \left (WW^T\right )^{3/2}W \\ & = & UL^4V^T. \\ \end{eqnarray*} Looks as if your general formula for positive integers is given by: $$ UL^kV^T \;\; =\;\; \left (WW^T\right )^{\frac{k-1}{2}}W. $$ This can just as well be formulated as $$ UL^kV^T \;\; =\;\; W \left (W^TW\right )^{\frac{k-1}{2}}. $$ The one remaining problem I see is whether or not this ultimately lowers your computational complexity. This might be an interesting read for you.
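Here is a quick numerical sanity check of the odd-$k$ case, a sketch assuming NumPy (for even $k$ you would need a matrix square root, e.g. scipy.linalg.fractional_matrix_power):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.standard_normal((n, n))

U, s, Vt = np.linalg.svd(W)              # W = U diag(s) Vt

k = 3                                    # odd k, so (k - 1)/2 is an integer power
target = U @ np.diag(s**k) @ Vt          # U L^k V^T, computed via the SVD
candidate = np.linalg.matrix_power(W @ W.T, (k - 1) // 2) @ W   # (W W^T)^((k-1)/2) W

print(np.allclose(target, candidate))    # True
```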
Show function is continuous at origin
You have to take the limit as $(x,y)$ tends to $(0,0)$. Use the fact that $|f(x,y)| \leq |xy^{2}|$ to see that $f(x,y) \to 0$ as $(x,y) \to (0,0)$.
Expected Ratio of Coin Flips
Your bullet points amount to saying that you're going to flip the coin until the number of heads exceeds the number of tails. Suppose that this happens on the $n$-th flip; then after $n-1$ flips you must have had equal numbers of heads and tails, so $n=2m+1$ for some $m$, you now have $m+1$ heads, and the ratio of heads to flips is $\frac{m+1}{2m+1}$. If $p_n$ is the probability of stopping after the $(2n+1)$-st flip, the expected ratio of heads to flips is $$\sum_{n\ge 0}\frac{p_n(n+1)}{2n+1}\;.$$ Thus, the first step is to determine the $p_n$. Clearly $p_0=\frac12$: we stop after $1$ toss if and only if we get a head. If we stop after $2n+1$ tosses, where $n>0$, the last toss must be a head, half of the first $2n$ tosses must be heads, and for $k=1,2,\dots,2n$ the first $k$ tosses must not include more heads than tails. The problem of counting such sequences is well-known: these are Dyck words of length $2n$, and there are $C_n$ of them, where $$C_n=\frac1{n+1}\binom{2n}n$$ is the $n$-th Catalan number. Each of those $C_n$ sequences occurs with probability $\left(\frac12\right)^{2n}$, and each is followed by a head with probability $\frac12$, so $$p_n=C_n\left(\frac12\right)^{2n}\cdot\frac12\;,$$ and the expected ratio is $$\frac12\sum_{n\ge 0}C_n\left(\frac12\right)^{2n}\frac{n+1}{2n+1}=\frac12\sum_{n\ge 0}\frac1{4^n(2n+1)}\binom{2n}n\;.$$ Very conveniently, the Taylor series for $\arcsin x$ is $$\arcsin x=\sum_{n\ge 0}\frac1{4^n(2n+1)}\binom{2n}nx^{2n+1}\;,$$ valid for $|x|\le 1$, so the expected ratio is $$\frac12\sum_{n\ge 0}\frac1{4^n(2n+1)}\binom{2n}n=\frac12\arcsin 1=\frac\pi4\approx 0.7854\;.$$ Added: I should emphasize that this calculation applies only to the stated strategy. As others have noted, that strategy is known not to be optimal, though it's quite a good one, especially for being so simple.
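A quick Monte Carlo sketch of the stated strategy (with a cap on the number of flips, since the stopping time is almost surely finite but heavy-tailed) agrees with $\frac\pi4\approx 0.7854$:

```python
import math
import random

def one_trial(max_flips=10**5):
    heads = 0
    for flips in range(1, max_flips + 1):
        heads += random.random() < 0.5
        if heads > flips - heads:          # stop as soon as heads exceed tails
            return heads / flips
    return heads / max_flips               # rare truncation; introduces a tiny downward bias

trials = 10_000
estimate = sum(one_trial() for _ in range(trials)) / trials
print(estimate, math.pi / 4)               # both roughly 0.785
```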
if $f(a_n)=(-1)^n \cdot n$ then $f'(x)$ returns any real value
You can use the mean value theorem and Darboux's theorem. The first one shows that there exist $c_n \in (a_{n+1}, a_{n})$ where $f'(c_n)$ is arbitrarily large in absolute value, with alternating signs, as $n$ grows; the second theorem says that the intermediate value property holds for $f'(x)$ for any differentiable (not necessarily $C^{1}$) function $f(x)$.
Is $x^4 + 4$ irreducible in $\mathbb{Z}_5$?
\begin{align} x^4+4&=x^4-1\\ &=(x^2-1)(x^2+1)\\ &=(x-1)(x+1)(x^2-4)\\ &=(x-1)(x+1)(x+2)(x-2)\\ &=(x-1)(x-2)(x-3)(x-4). \end{align}
How to find the primitive function of $\frac{1}{5+2\sin x-\cos x}$
Remember that on each interval of the type $(\pi+2k\pi, \pi+(2k+2)\pi)$ your antiderivative is actually $$\int\frac{dx}{5+2\sin x-\cos x}=\frac{\textrm{arctan}\left(\frac{3\tan\frac{x}{2}+1}{\sqrt{5}}\right)}{\sqrt{5}} +C.$$ Now, at $x=\pi+2k\pi$ your function has a jump of $\frac{\pi}{\sqrt{5}}$, so if you choose your constant as $$\int\frac{dx}{5+2\sin x-\cos x}=\frac{\textrm{arctan}\left(\frac{3\tan\frac{x}{2}+1}{\sqrt{5}}\right)}{\sqrt{5}} + k \frac{\pi}{\sqrt{5}} \,;\, x \in (\pi+2k\pi, \pi+(2k+2)\pi)$$ your antiderivative becomes continuous on $\mathbb R$... To make it more precise, let $$ g: \mathbb R \backslash \{ \pi+ 2k \pi \}\to\mathbb R \,;\, g(x)= k \frac{\pi}{\sqrt{5}} \,;\, x \in (\pi+2k\pi, \pi+(2k+2)\pi) \,.$$ Then $g$ is locally constant, and $$\frac{\textrm{arctan}\left(\frac{3\tan\frac{x}{2}+1}{\sqrt{5}}\right)}{\sqrt{5}} +g(x) +C $$ is an antiderivative of your function on $\mathbb R \backslash \{ \pi+ 2k \pi \}$ which has removable discontinuities at $ \{ \pi+ 2k \pi \}$, thus it can be extended to a continuous function on $\mathbb R$.
Simple number theory problem
If $(c^6 - 3)/(c^2 + 2)$ is an integer, then so is $$\frac{c^6 - 3}{c^2 + 2} - (c^4 - 2c^2 + 4) = \frac{c^6 - 3}{c^2 + 2} - \frac{c^6 + 8}{c^2 + 2} = \frac{-11}{c^2 + 2},$$ that is, $c^2 + 2$ divides $11$. The only way this can happen is if $c^2 = 9$, so $c = \pm 3$.
Are imaginary numbers really incomparable?
There is no way to order the complex numbers that is compatible with the arithmetic operations. The precise term is ordered field; among their properties, $x^2 \ge 0$ for every $x$. Since $i^2=-1$, we would need $-1\ge 0$, which is impossible. However, if you want to measure distance, you can do that with a norm. In the complex numbers, this is calculated as $|a+bi|=\sqrt{a^2+b^2}$. Hence, it is correct to say that $2i$ is twice as far from the origin as $i$ is.
Finding a convex function dominated (point wise) by a given positive function on (0,1)
The supremum of a family of convex functions is again convex. So let $$h(x) = \sup \{ \varphi(x) : \varphi \text{ is convex and } \varphi(t) \leqslant g(t) \text{ for all } t \in (0,1)\}.$$ Then $h$ is a convex function, and $h(x) \leqslant g(x)$ for all $x\in (0,1)$. Now it remains to see that $\lim\limits_{x\to 0} h(x) = +\infty$. Let $M > 0$. Since $\lim\limits_{x\to 0} g(x) =+\infty$, there is an $\varepsilon > 0$ such that $x \leqslant \varepsilon \implies g(x) > 2M$. Then $$\psi(x) = \begin{cases} 2M\bigl(1 - \frac{x}{\varepsilon}\bigr) &, 0 < x < \varepsilon \\ \qquad 0 &, \varepsilon \leqslant x < 1\end{cases}$$ is a convex function on $(0,1)$ with $\psi(x) < g(x)$ for all $x\in (0,1)$ and $\psi(x) > M$ for $x < \frac{\varepsilon}{2}$, hence we have $h(x) > M$ for $x < \frac{\varepsilon}{2}$. Since $M$ was arbitrary, this shows $\lim\limits_{x\to 0} h(x) = +\infty$.
Poisson Process question, why's my answer wrong?
Say $X$ is the number of customers that arrive per minute. Then you worked out $$P(X=4)$$ but the question is actually asking you to work out $$P(X\le4)$$ You can do this in one of two ways: 1) $$P(X\le4)=P(X=0)+P(X=1)+P(X=2)+P(X=3)+P(X=4)$$ 2) Use some Poisson tables to find this probability.
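For example, if the arrival rate were $\lambda = 3$ customers per minute (a made-up value, just for illustration), the two routes give the same number; a sketch assuming SciPy:

```python
from math import exp, factorial
from scipy.stats import poisson

lam = 3  # hypothetical rate; use the rate given in your problem

# Route 1: sum the individual probabilities P(X = 0), ..., P(X = 4).
by_hand = sum(exp(-lam) * lam**k / factorial(k) for k in range(5))

# Route 2: use the Poisson cumulative distribution function (the "tables").
by_cdf = poisson.cdf(4, lam)

print(by_hand, by_cdf)  # both about 0.815
```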
Absolute value inequality, where am I wrong?
When you take the root of an inequality, you have to make sure that everything is positive and then take positive roots. So after taking the roots, you get: $\frac 12 < |x-\frac 52| < \frac 32$. Now, you can regard the two cases $x> \frac 52$ and $x < \frac 52$ to eliminate the absolute value.
Abstract algebra olympiad problem for university student
A book is "Putnam and Beyond" https://link.springer.com/book/10.1007/978-0-387-68445-1
How to associate a function to an expression?
I think that your question is slightly ambiguous as I'm not really sure what you mean by "associate a function to an expression"; nevertheless let me give a couple of observations on this issue. First and foremost, when talking about "expressions" one should fix a language over which these expressions are constructed; from your given example about the real numbers, I will take my expressions from now onwards to be formulated in the first-order language of fields $\mathscr L = \{+, \cdot, -, 0, 1\}$. As you've clarified in your comment, "expressions" here mean $\mathscr L$-terms; in particular, $\mathscr L$-terms in the language of fields are finite sequences of symbols from $\mathscr L$ of the form $$t_1(x):= x,$$ or $$t_2(x_1, x_2):= -3\cdot x_1^2 +2\cdot x_2,$$ or $$t_3(x_1,x_2,x_3) := x_1\cdot x_2 \cdot x_3,$$ for example. Note that $\mathscr L$-terms are nothing more than syntactical constructions which we can interpret (i.e. "give meaning") in an $\mathscr L$-structure. In your example, you work with the $\mathscr L$-structure $\mathscr M$ whose domain is the set of real numbers and where each of the functions $+, \cdot, -$ and constants $0,1$ are interpreted in the standard way, so that each $\mathscr L$-term is interpreted as a polynomial with coefficients in $\mathbb Z$, in one (e.g. $t_1(x)$) or multiple variables (e.g. $t_3(x_1, x_2, x_3)$). Surely one can associate to each of these polynomials a polynomial map, but in general there is no canonical way of doing this, even after specifying the language and the $\mathscr L$-structure in which we interpret our terms. In our example, the term $t_1(x) := x$ can be seen as a term in one variable, and in this way we could associate to the interpretation of this term the identity map on $\mathbb R$. However, we could also associate to it the projection map onto, say, the third co-ordinate, since $t_1$ is also a term in the free variables $w,v,x,z$; remember that the notation $t_1(x)$ means that $t_1$ has free variables amongst $\{x\}$, so in particular a term in the free variable $x$ is also a term in the free variables $w, v, x$ and $z$. By the same reasoning, the interpretation of the term $t_4: =x_1 + x_2 + x_3 -x_3$ in $\mathscr M$ could be associated to a polynomial map on $\mathbb R^3$ or on $\mathbb R^n$ for any $n \in \mathbb N^{\geq 3}$. We can also associate to it a polynomial map on $\mathbb R^2$ since the interpretations of $t_4$ and of $x_1 +x_2$ in $\mathscr M$ coincide; however, if we have another $\mathscr L$-structure $\mathscr M'$ in which we interpret $-$ in the same way we interpret $+$ (yes, we can do this!), then the interpretation of $t_4$ cannot be associated to a polynomial map on $\mathbb R^2$.
About a polynomial in two variables
If we want $AX^2+BXY+CY^2+DX+EY+F=(GX+HY+J)^2$, we can just expand the square to get $G^2X^2+2GHXY+H^2Y^2+2GJX+2HJY+J^2$ and equate like terms. You actually get a few tests: $B^2-4AC=0, D^2-4AF=0,E^2-4CF=0$, all of which have to pass.
Tiling a rectangle with tetris pieces of T-shape
I answered my question, so this question is answered and accepted (unless other users post something here, I'll accept my answer when I'm allowed to). Walkup's argument was the following. Let the table be given by $\{0,1,2,...,m\}\times \{0,1,2,...,n\}$. The idea was to prove that points of the form $(2r,2s)$ where $r\not\equiv s\pmod2$ can't be a corner of any T-tetromino, and that any edge with an endpoint of the form $(2r,2s)$ where $r\equiv s\pmod2$ has to lie on the boundary of a T-tetromino. Walkup proved this by strong induction, claiming the statement is true for all such points $(x,y)$ with $x+y\le 4\lambda$ and inducting on $\lambda$.
about Cartan's theorem
Consider the map $\det\colon GL(n,\Bbb C)\longrightarrow\Bbb C$. It is continuous and therefore $\det^{-1}\bigl(\{1\}\bigr)$ is closed. But $\det^{-1}\bigl(\{1\}\bigr)=SL(n,\Bbb C)$. And its dimension is$$\dim\mathfrak{sl}(n,\Bbb C)=\dim\left\{M\in\Bbb C^{n\times n}\,\middle|\,\operatorname{tr}M=0\right\}=n^2-1.$$
Proof by contradiction that if a set $A$ contains $n_0$ and contains $k+1$ whenever it contains $k$, then it contains all numbers $\ge n_0$
I thought that since the proof only needed a bit of refinement I would restate it here with the corrections mentioned: Let $N_0=\{i:i\in \mathbb{N}, i\geq n_0\}$ and let $B=N_0 \setminus A$. Suppose, for contradiction, that $B\neq \emptyset$; then we can invoke the well-ordering principle and let $m_0$ be the least element of $B$. Since $n_0\in A$ we have $m_0 \neq n_0$, which implies that $m_0 \gt n_0$. We can then deduce that $m_0-1\in N_0$ and, by the minimality of $m_0$, that $m_0-1 \not\in B$, hence $m_0-1\in A$. But by the given condition that $k+1\in A$ whenever $k\in A$, we get $m_0\in A$, $\therefore m_0\not\in B$, which is a contradiction. Therefore $B$ must be empty.
Flux Integral in spherical coordinates
We know from Gauss's law that $$\oint \mathbf E\cdot d\mathbf s = \frac {Q_{in}}{\epsilon_0}. $$ Let's apply this to a point charge at the origin, where $$\mathbf E=\frac {Q_{in}}{4\pi \epsilon_0 r^2}\, \mathbf e_r.$$ So in your problem you are essentially doing the same thing, where $$\frac {Q_{in}}{4\pi \epsilon_0}$$ is taken to the other side of Gauss's equation. So your answer is correct.
Set Theory: Equivalency of Axiom of Choice and Choice Function
The point is that we assume one form of $\sf AC$, and prove another. The form $\sf AC_1$ is a statement only applied to families of pairwise disjoint families, but $\sf AC_2$ is about any family of non-empty sets. So we start with an arbitrary family of non-empty sets, and now we want to appeal to $\sf AC_1$, so we need to find a family of pairwise disjoint sets. Given just two sets $A$ and $B$, the simplest way of making them pairwise disjoint is to replace them by $\{0\}\times A$ and $\{1\}\times B$. Why are these disjoint? Because the elements of $\{0\}\times A$ are ordered pairs of the form $\langle 0,a\rangle$; and similarly the elements of $\{1\}\times B$ are ordered pairs of the form $\langle 1,b\rangle$. By the properties of ordered pairs, it is impossible that $\langle 0,a\rangle=\langle 1,b\rangle$, since $0\neq 1$. The idea now is similar. We replace each $X$ by $\{X\}\times X$. So now if we have any $X,Y$ in the original family $\cal F$, then $(\{X\}\times X)\cap(\{Y\}\times Y)$ is not empty if and only if $X=Y$ by the same argument showing that $\{0\}\times A$ is disjoint from $\{1\}\times B$. So we can apply $\sf AC_1$ to $\cal F^*$, and obtain a set which happens to be a function. So to your question, the axiom of choice itself has nothing to do with ordered pairs. They are just a very useful way of making a family of sets into a family of pairwise disjoint sets, and by planning well, we can also obtain the choice function that we are looking for from $\cal F$ with minimal effort.
Could you please explain why the remainder when the square of 49 is divided by the square root of 49 is 0?
I think you're confusing "remainder" with "quotient." If you divide $49^2$ by $7$ the quotient is $343$. The remainder is $0$.
Please help with this approximation problem
Percent Error $= \dfrac{V_\text{observed} - V_\text{true}}{V_\text{true}} \times 100\% = \dfrac{2.715 - 2.7145}{2.7145}\times 100\% = \dfrac{0.0005}{2.7145} \times 100\% \approx 0.0184\%$
$\lim_{k\to\infty}\int_{1/k}^k f(x)\,dx \neq \int_0^{\infty}f(x)\,dx$
Let $f$ be any nice continuous function on $[0,1]$ with $f(0)=f(1)$ and $\int_0^{1} f(x) \, dx=0$ but $\int_0^{\frac 1 2} f(x) \, dx\neq 0$. Extend $f$ to a continuous periodic function with period $1$. Then the first limit is $0$ but the second limit does not exist.
What is the correct seminormal representation of (23) corresponding to the [22] partition of S4?
It's just a mistake in the thesis. If you work out the character table for $S_4$, you find that the 2-dimensional irreducible representation, which corresponds to the [22] partition, has character 2 on the conjugacy class of (12)(34). Therefore (12)(34) and its conjugates are the identity map. So (12) and (34) must be the same map.
Understanding why the family of sets is not an algebra
I assume $n \geq 2$ (if $n=1$, then $\mathcal{F}= \{\emptyset, \{1,2\}\}$ is closed under finite unions). The union $\{1,2,3\}=\{1,2\}\cup \{2,3\}$ is not in $\mathcal{F}$ so $\mathcal{F}$ is not closed under finite unions.
Number of matrices with zero determinant
Over a finite field of order $q$, it is easy to count matrices with nonzero determinant (and from there, get matrices with zero determinant by complementary counting). There are $q^n-1$ choices for the first column: we just have to choose a nonzero vector in $\mathbb F_q^n$. Then there are $q^n-q$ choices for the second column: we must avoid choosing any multiple of the first column. There are $q^n-q^2$ choices for the third column (since there are $q^2$ distinct linear combinations of the first two columns, which we must avoid, and so on). So the number of $n\times n$ matrices over $\mathbb F_q$ with nonzero determinant is $$ \prod_{k=0}^{n-1} (q^n - q^k) $$ and then $q^{n^2}$ minus this product gives the number of matrices with zero determinant. But $\mathbb Z_4$ (if by this you mean the integers modulo $4$) is not a field, and so this calculation doesn't quite work. For example, $$\det \begin{bmatrix}2 & 0 \\ 0 & 2\end{bmatrix} \equiv 0 \pmod 4$$ even though the second column is not a multiple of the first. Here is a general formula for $n\times n$ matrices over $\mathbb Z_4$. Write the matrix as $A + 2B$, where $A,B$ have entries from $\{0,1\}$. Suppose that mod $2$, $A$ has corank $r$ (rank $n-r$). Then we can choose $r$ independent (mod $2$) vectors $\mathbf x_1, \dots, \mathbf x_r$ such that $A\mathbf x_i \equiv \mathbf 0 \pmod 2$. Extend these to a set of $n$ vectors that are independent mod $2$, and let $X$ be the matrix whose columns are these $n$ vectors. Because $X$ is invertible mod $2$, $X$ is invertible mod $4$, so $\det(A + 2B) = \det(X(A+2B)X^{-1}) = \det(XAX^{-1} + 2 XBX^{-1})$. If we find a decomposition of $XAX^{-1} + 2 XBX^{-1}$ into $A' + 2B'$, where $A', B'$ have entries from $\{0,1\}$, then the first $r$ rows of $A'$ are $0$ and the last $n-r$ rows are linearly independent. It follows that if $r \ge 2$, then $\det(A + 2B)$ is definitely $0$: a row expansion by the first row of $A' + 2B'$ picks up a factor of $2$, and a row expansion by the second row of $A' + 2B'$ picks up another factor of $2$. What about when $r=1$? When $r=1$, we can factor out a $2$ from the first row of $A' + 2B'$. This lets us rewrite $\det(A' + 2B')$ as $2 \cdot \det(A'' + 2B'')$ where: the first row of $A''$ is the first row of $B'$, and other rows are taken from $A'$. the first row of $B''$ is $0$, and other rows are taken from $B'$. If $\det(A' + 2B') = 2$, then $\det(A'' + 2B'')$ is odd, which means $A''$ is invertible mod $2$. Given the last $n-1$ rows of $A''$ (which were fixed by $A'$), $2^{n-1}$ out of $2^n$ choices for the first row of $A''$ (which comes from $B'$) will make $A''$ invertible mod $2$. Therefore half of the possible choices for $B'$ (which come from half of the possible choices for $B$) give determinant $2$ here. So exactly half of the matrices with $r=1$ will have determinant $0$, and the other half will have determinant $2$. Thus, if we want to count the matrices mod $4$ with determinant $0$, let $N_0, N_1, N_{\ge 2}$ (with $N_0 + N_1 + N_{\ge 2} = 2^{n^2}$) be the number of matrices over $\mathbb F_2$ with corank $0$, corank $1$, and corank $\ge 2$. Then the number we're interested in is $$ N_1 \cdot 2^{n^2-1} + N_{\ge 2} \cdot 2^{n^2}. $$ (When $A$ is counted by $N_{\ge 2}$, any matrix $B$ will make $\det(A+2B)=0$; when $A$ is counted by $N_1$, half of all possible matrices will do.) For example, when $n=2$, $N_1 = 9$ and $N_{\ge 2} = 1$, so there are $9 \cdot 2^3 + 1 \cdot 2^4 = 72 + 16 = 88$ matrices. 
We have $N_0 = \prod_{k=0}^{n-1} (2^n - 2^k)$ by the formula for the first section. Counting $N_1$ is a bit harder, but we can do it: if we first choose a linearly dependent row on the $i^{\text{th}}$ step ($i=0, 1,\dots, n-1$), the number of ways is $2^i \prod_{k=0}^{n-2} (2^n - 2^k)$, so altogether $N_1 = (2^n - 1)\prod_{k=0}^{n-2} (2^n - 2^k)$. So the general formula for $n \times n$ matrices of $\mathbb Z_4$ is $$ 2^{n^2-1}(2^n - 1)\prod_{k=0}^{n-2} (2^n - 2^k) + 2^{n^2}\left(2^{n^2} - \prod_{k=0}^{n-1} (2^n - 2^k) - (2^n - 1)\prod_{k=0}^{n-2} (2^n - 2^k)\right). $$ Some algebra simplifies this to $$ 4^{n^2} - 2^{n^2-1} (2^{n+1}-1) \prod_{k=0}^{n-2} (2^n - 2^k). $$ The sequence begins $1, 88, 100864, 1735131136, \dots$. For $\mathbb Z_{p^2}$ in general, similar reasoning should work.
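A brute-force check of the $n = 2$ case (a small Python sketch) confirms the count of $88$:

```python
from itertools import product

# Count 2x2 matrices over Z_4 with determinant congruent to 0 mod 4.
count = sum(1 for a, b, c, d in product(range(4), repeat=4)
            if (a * d - b * c) % 4 == 0)
print(count)  # 88 = 4^4 - 2^3 * (2^3 - 1) * (2^2 - 2^0), matching the formula above
```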
Are there implicit and explicit solution methods for elliptic equation?
There is a good reason for that... In elliptic equations you do not have the time variable, just space. You have for instance the values at the boundary but you must solve for the values in the interior, as they are not given. This is not the case in parabolic problems, where you know the solution in the whole space domain at the initial time.
Proving $ \mathbf{x} \perp \mathbf{y} \implies ||\mathbf{x}+\mathbf{y}||^2 =||\mathbf{x}||^2+||\mathbf{y}||^2 $
Expanding, $\|\mathbf x+\mathbf y\|^2=(\mathbf x+\mathbf y)\cdot(\mathbf x+\mathbf y)=\|\mathbf x\|^2+2\,\mathbf x\cdot \mathbf y+\|\mathbf y\|^2$. By definition $$\mathbf x\perp \mathbf y\iff \mathbf x\cdot \mathbf y=0,$$ so the mixed term cancels and you are left with the squares.
Minimizing the Sum of Binomial Random Variables
Here's how I would think about this problem: we can think of the probability of finding the climber as the probability of finding the climber given she went to mount A (times the probability she went there), plus the probability of finding the climber given she went to mount B (times the probability she went there). That is, \begin{align*} P(\text{find})&=P(\text{find}\mid A)P(A)+P(\text{find}\mid B)P(B)= P(\text{find}\mid A)\,0.6+P(\text{find}\mid B)\,0.4\\ &= (1-P(\text{no find}\mid A))\,0.6+(1-P(\text{no find}\mid B))\,0.4\\ &=(1-0.7^a)\,0.6+(1-0.7^b)\,0.4\\ &= (1-0.7^x)\,0.6+(1-0.7^{12-x})\,0.4, \end{align*} where $a$ denotes the number of search parties deployed to mount $A$, and $b$ to $B$, so $a+b = 12$; the last equality follows from $x=a$, so $b=12-x$. Note the third line follows from the fact that $P(\text{no find}\mid A)= P(\text{1 doesn't find}\mid A)\cdot P(\text{2 doesn't find}\mid A)\cdots P(\text{a doesn't find}\mid A)=(0.7)^a$. To find the number of parties that should be sent to mount $A$, maximize the expression above in $x$, which will tell you that they should send $x=7$ parties.
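Since there are only $13$ possible allocations, a direct check (small Python sketch) confirms the maximum at $x = 7$:

```python
def p_find(x, total=12, p_a=0.6, miss=0.7):
    """Probability of finding the climber with x parties on A and total - x on B."""
    return (1 - miss**x) * p_a + (1 - miss**(total - x)) * (1 - p_a)

best = max(range(13), key=p_find)
print(best, p_find(best))   # 7, about 0.883
```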
Monotonicity in coupling of Markov chains
You are correct in saying that you don't need monotonicity to prove that coupling inequality. It refers to the coupling of two probability distributions. This is different than coupling two (or more) sample paths of a Markov chain. Imagine running sample paths of a Markov chain starting from all points in the state space, and using the same random inputs. If all of the paths come together, they will stay together or "coupled" for all future time points by the Markov property that says "where you go just depends on where you are and some random input". If the sample paths come together, they all "are" in the same place and if you are using the same random input for all paths they will stay coupled (in this path sense). For path coupling, you also don't need monotonicity. However, if you have it then it means that paths, run with the same random inputs, will maintain their order and never cross. So, if you are trying to check whether or not all possible sample paths have coupled (in the path sense), you could follow just two paths-- one starting from the bottom of the space and one starting from the top.
Probability of tic tac toe winning based on coin flips
A tie can occur in the following ways. First, we could get a board where nobody has 3-in-a-row. In that case, up to symmetry, the board must look like one of the three tied positions you can achieve in tic-tac-toe:

X X O    X X O    X X O
O X X    O O X    O O X
X O O    X X O    X O X

There are $32$ boards in this category. For the first board, we can rotate it in $4$ ways, reflect it or not, and swap X with O or not, giving $16$ variations; for the second and third, reflections don't help (since they're symmetric) so we get $8$ variations of each. Second, we could get a board where both players have one 3-in-a-row. No player can claim a diagonal: if that happens, there is no other 3-in-a-row for the other player to claim. Similarly, no player can claim both a vertical and a horizontal 3-in-a-row. So we are left with boards like this one:

X X X
O O O
X O O

There are $72$ boards in this category. We can choose in $2$ ways whether the 3-in-a-rows are vertical or horizontal. Once we do, we can permute the "win for X" line, the "win for O" line, and the third line in $3! = 6$ ways, and specify the outcomes along the third line in $2^3 - 2 = 6$ ways (it cannot be X X X or O O O, but any of the other outcomes are valid). So the probability of a tie is $\frac{32 + 72}{2^9} = \frac{104}{512} = \frac{13}{64}$, leaving a probability of $\frac{51}{128}$ that X wins and a probability of $\frac{51}{128}$ that O wins. (Each of these is just over $39.8\%$.)
What is the formula for $(x_1 - x_2 - ... - x_n)^2$
Here you can find a formula which covers all your cases: https://en.wikipedia.org/wiki/Multinomial_theorem. Since the $x_i$, $i=1, ..., m$, in the mentioned theorem can be negative as well, you do not have to distinguish between $+$ and $-$ inside the brackets.
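For instance, writing the case in the title out explicitly (all cross terms involving $x_1$ pick up a minus sign, the others a plus sign): $$(x_1 - x_2 - \cdots - x_n)^2 = \sum_{i=1}^n x_i^2 - 2\sum_{j=2}^n x_1 x_j + 2\sum_{2\le i<j\le n} x_i x_j,$$ so for $n=3$ one gets $(x_1-x_2-x_3)^2 = x_1^2+x_2^2+x_3^2-2x_1x_2-2x_1x_3+2x_2x_3$.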
Complex number in a 3x3 matrix
Just use your usual methods of solving linear systems of equations, but using arithmetic of complex numbers.
Show $\mathbb{F}_2$ is a field
To show $(F,+,\cdot)$ is a field, it is enough to show that (1) $(F,+)$ is an abelian group, (2) $(F\backslash \{0\}, \cdot)$ is an abelian group, and (3) $\cdot$ distributes over $+$ in the usual way (on both sides).
Find a subset of $\mathbb{R}^2$ that is path connected but is locally connected at none of its points.
Let $T$ be the space constructed previously, and define $S$ as follows: let $Y$ denote the set of rational points of the interval $[0,1]\times \{1\}$, and let $S$ denote the union of all line segments between the points in $Y$ and the point $q=(1,0)$. Consider $S\cup T$.
Proving $((a\bmod n) (b\bmod n))\bmod n = ab\bmod n $
Note that by the Chinese Remainder Theorem, for any $k\gt 0$, and for any integers $a$ and $b$, there exists an integer $x$ such that $x\bmod k = a\bmod k$ and $x\bmod (k+1) = b\bmod (k+1)$. That is: the remainders of $x$ modulo $k$ and modulo $k+1$ are completely unrelated. So I do not see how you are going to be able to leverage "knowing" the result modulo $k$ into a proof of the result modulo $k+1$, unless you simply prove it directly modulo $k+1$. So it is really simpler to show that the result holds modulo $n$ directly, for any $n\gt 0$. Remember that $x\bmod n = r$ if and only if $0\leq r\lt n$ and $x-r$ is a multiple of $n$. So first show that $ab - (a\bmod n)(b\bmod n)$ is a multiple of $n$. For example, if $a\bmod n = r$ and $b\bmod n = s$, then $a-r$ and $b-s$ are both multiples of $n$; then $(a-r)b$ is a multiple of $n$, and $r(b-s)$ is a multiple of $n$, so...
Minimum Spanning Tree that minimizes a function
Let $T$ be a minimum spanning tree of $G$. Suppose there exists a spanning tree $T'$ such that $f(T')<f(T)$. Let $e, e'$ be the corresponding edges of maximum weight in $T, T'$, respectively. If we remove $e$ from $T$, then $T$ is broken into two connected components. There must exist an edge $e''$ in $T'$ which connects these components. Clearly $w_{e''}\leqslant w_{e'}< w_e$. Thus the tree $T''$ obtained from $T$ by replacing the edge $e$ with $e''$ has weight $w(T'') = w(T) + w_{e''} - w_e < w(T)$, a contradiction.
Another probability combinatorics problem inspired by Bible
Write $C_i = \{ a_i, A_i, e_i, E_i\}$ for the 4 original chromosome $i$'s present in Adam and Eve. Let Noah's chromosomal makeup be $N = (n_1, N_1,\dots, n_{23}, N_{23})$, a 46-vector in $\Omega = C_1 \times C_1 \times \dots \times C_{23}\times C_{23}$. To simplify things, we know that Noah's sons aren't going to bring any new chromosomes to the table, so let's start by answering "what's the probability that all of $C_1$ can be found in Noah, his wife, or the three brothers' wives, assuming they draw their chromosomes at random from the original pool." Since each person in question gets two 1-chromosomes, we have 10 draws from the set $\{a_1,A_1,e_1,E_1\}$ and, since the draws are independent, elementary probability gives, to first order (the union bound; the higher-order inclusion–exclusion corrections are small), $$ P(\textrm{some chromosome in } C_1 \textrm{ is missing from the survivors}) \approx {3^{10} \over 2^{18}} \approx {9 \over 40}. $$ Then it follows that (each chromosome being independent of the others), $$ P(\textrm{some chromosome in } \cup_1^{23} C_i \textrm{ is missing }) \approx 1 - \left({31 \over 40} \right)^{23} \approx 0.997.$$ That is, given the assumptions (enough generations have gone by that the draws from $C_i$ are nearly independent; a grossly oversimplified model of reproduction and mutation; etc.) the absence of a chromosome is almost certain. That being said, events far less probable occur regularly throughout the Bible, so I wouldn't look at this as anything more than entertainment. Calculations like this are just as likely to reaffirm faith as to vindicate doubt.
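A short sketch (plain Python) that computes the per-chromosome probability exactly by inclusion–exclusion, and then the final figure:

```python
from math import comb

draws = 10   # chromosome-i copies carried by Noah, his wife, and the three sons' wives
types = 4    # the four original chromosome i's: a_i, A_i, e_i, E_i

# P(at least one of the 4 types is missing among 10 independent uniform draws),
# by inclusion-exclusion over which types are missing.
p_missing = sum((-1)**(j + 1) * comb(types, j) * ((types - j) / types)**draws
                for j in range(1, types + 1))

p_any_missing = 1 - (1 - p_missing)**23   # 23 independent chromosome pairs
print(p_missing, p_any_missing)           # about 0.219 and 0.997
```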
Algorithm step question.
If we apply an algorithm for $0$ steps, this is generally interpreted as "don't do anything." This is analogous to applying a function zero times: where $f(x)=x^2$, applying it zero times to $7$ yields $7$.
differentiation of multivalued function integration
You can calculate it this way: $$g(\alpha,\beta)=\int_0^{\alpha} f(x,\beta)dx$$ $$\frac{d}{dt}g(t,t)=(\partial_{\alpha}g)(t,t) + (\partial_{\beta}g)(t,t)$$ $$\frac{d}{dt}g(t,t)=f(t,t)+\int_0^{t} (\partial_{\beta}f)(x,t)dx$$
How to solve nonlinear partial differential equation with two variables
This is a nonlinear PDE, so Laplace transform is unlikely to help. Also, it's one equation but with two dependent variables $f$ and $g$. You might make a more-or-less arbitrary choice for one of them, and hope to solve (at least locally) for the other. Some easy solutions are obtained by assuming $f$ or $g$ depends only on $t$.
Find $a_{2012}-3a_{2010}/3 a_{2011}$ where the sequence $a_n$ is determined by roots of a quadratic equation
We have $$a_{2012}-3a_{2010}=(\alpha^{2012}-\beta^{2012})-3(\alpha^{2010}-\beta^{2010})=$$ $$=\alpha^{2010}(\alpha^2-3)-\beta^{2010}(\beta^2-3).$$ But $\alpha^2-3=9\alpha$ and $\beta^2-3=9\beta$, since each of them satisfies the quadratic equation. Hence we may simplify as $$\alpha^{2010}(9\alpha) - \beta^{2010}(9\beta)=9(\alpha^{2011}-\beta^{2011})=9a_{2011}.$$ Dividing by $3a_{2011}$ gives just $3$. The other problem is similar.
Sigma locally finite basis in (R, usual metric)
Let $\mathscr{B}$ be the base consisting of the open intervals in $\Bbb R$ with rational endpoints. $\mathscr{B}$ is countable, so we can enumerate it as $\mathscr{B}=\{B_n:n\in\Bbb N\}$. For $n\in\Bbb N$ let $\mathscr{B}_n=\{B_n\}$. Then each $\mathscr{B}_n$ is trivially locally finite, so $\mathscr{B}$ is $\sigma$-locally finite. This argument of course works equally well in any second countable space: a countable base is automatically $\sigma$-locally finite, even $\sigma$-locally discrete.
Normal Bundle is a manifold in Pollack
Both sets are $\{(u,v)|u\in U \text{ and } v\in N_u(Y) \}$.
Is this ODE-integral problem solvable?
I may have an answer now to my own question that turns out to be much simpler than I thought. It does not even depend on the ODE at all. We simply observe that the integrand of $$\int_0^\infty e^{-\int_0^\tau x(s)ds}x(\tau)d\tau $$ is an exact derivative. Indeed, by the chain rule $$\frac {d}{d\tau}e^{-\int_0^\tau x(s)ds}=-e^{-\int_0^\tau x(s)ds}\cdot \frac {d}{d\tau}\left(\int_0^\tau x(s)ds\right),$$ and applying the fundamental theorem of calculus to the last factor gives $$\frac {d}{d\tau}e^{-\int_0^\tau x(s)ds}=-e^{-\int_0^\tau x(s)ds}\cdot x(\tau).$$ Now integrate both sides w.r.t. $\tau$ from $0$ to $\infty$ and multiply by $-1$: $$-e^{-\int_0^\tau x(s)ds}\bigg|_0^\infty=\int _0^\infty e^{-\int_0^\tau x(s)ds}\cdot x(\tau)d\tau$$ The right hand side is what we want to evaluate and the left hand side is equal to: $$\lim_{\tau \to \infty}\left(-e^{-\int_0^\tau x(s)ds}\right)+e^0$$ Assumption 1. Now if we assume $x(s)$ tends to a positive limit as $s$ tends to infinity, then $\int_0^\tau x(s)ds\to\infty$, so this limit is equal to $0$. This gives $$\int _0^\infty e^{-\int_0^\tau x(s)ds}\cdot x(\tau)d\tau=1,$$ independent of the ODE. Follow-up question: what is the minimal assumption needed (instead of "Assumption 1") to make sure this conclusion is correct?
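As a numerical illustration (not a proof), take the arbitrary test function $x(s)=1+e^{-s}$, which tends to the positive limit $1$; the integral should then come out as $1$. A quick check with scipy:

```python
import numpy as np
from scipy.integrate import quad

# Test function x(s) = 1 + exp(-s); its running integral has the closed form below
def inner(tau):
    return tau + 1.0 - np.exp(-tau)      # = integral of x(s) from 0 to tau

def integrand(tau):
    return np.exp(-inner(tau)) * (1.0 + np.exp(-tau))

value, err = quad(integrand, 0, np.inf)
print(value)   # expected: approximately 1.0
```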
Prove this function is convex using direct proof.
You can't, because it isn't. We have $$ \begin{align*} g'(x) &= xf'(x) + f(x) \\ g''(x) &= xf''(x)+f'(x)+f'(x) = xf''(x)+2f'(x). \end{align*} $$ Now, let $$ f(x) := -\frac{c}{3}x^3+cx+d, \qquad c>0. $$ For $d$ sufficiently large, we have $f(x)>0$ for $0<x<1$. We have $$ f'(x) = -cx^2+c, $$ so $f'(x)>0$ and $\lim_{x\to 0}f'(x)=c$. Next, $$ f''(x) = -2cx. $$ Thus, $$ g''(x) = xf''(x)+2f'(x) = -2cx^2-2cx^2+2c = -4cx^2+2c, $$ which is negative for $x\to 1$. The key thing you need to look at is your equation $$ g''(x) = xf''(x)+2f'(x). $$ Thus, to control $g''$, you need to control both $f'$ and $f''$ across the entire interval $0<x<1$. Controlling $f'$ on the entire interval but $f''$ only pointwise (as in a limit for $f''(x)$ as $x\to 0$) is not enough. (If you add another condition on $\lim_{x\to 1} f''(x)$, I am confident we can create another counterexample that fails somewhere else on the interval.) Derivatives of well-behaved functions can behave quite badly indeed. Where does your proposed proof break down? You switch quantifiers in the middle without noticing. Your function $h$ indeed satisfies $h'(x)<0$, but not for all $x$, only for some $x$ near $1$. Therefore, your integral inequality $$ \int_x^1tf'(t)\,dt \geq\int_x^1\frac{f'(1)}{t}\,dt $$ does not necessarily hold. Bottom line: explicitly write out quantifiers and make sure your deductions hold for the quantifiers you actually have. The following is my original answer to the question pre-edit, where the condition $\lim_{x\to 0}f'(x)=c>0$ was not yet imposed. You can't, because it isn't. Now, let $$ f(x) := -x-\frac{1}{x}+c, $$ which, once $c$ is large enough, is positive on $[\varepsilon,1)$ for any fixed $\varepsilon>0$ (though not near $0$, where $-\frac1x\to-\infty$). We have $$ \begin{align*} f'(x) = & -1+\frac{1}{x^2} \\ f''(x) = &-\frac{2}{x^3}, \end{align*} $$ so $f'(x)>0$ for $0<x<1$, and $f''(x)$ exists, but $$ g''(x) = xf''(x)+2f'(x) = -\frac{2}{x^2}-2+\frac{2}{x^2} = -2 <0. $$ Your proposed proof (without the condition added later) breaks down at the very end, where you write that $f(x) < f(1) + f'(x) \ln x$, which implies $\lim_{x \rightarrow 0} f(x) = -\infty$. This implication does not hold unless you can bound the behavior of $f'(x)$ as $x\to 0$. If you can't, $f'(x) \ln x$ can go elsewhere than to $-\infty$ as $x\to 0$. For instance, if $f'(x)=x$, then $\lim_{x\to 0}x\ln x=0$ by L'Hôpital's rule.
Probability-Bayes's rule
For (a), you've mixed up the summation index -- and what is $a_n$? The denominator should be $$\sum_{k=0}^N 2\frac{k}{N}\frac{N-k}{N}\frac{1}{N+1}= \frac{2}{N^2(N+1)}\sum_{k=0}^{N}(kN-k^2)=\frac{2}{N^2(N+1)} \left(N\sum_{k=0}^{N}k-\sum_{k=0}^{N}k^2\right). $$ The last two sums are both known; finally you should arrive at $\frac{N-1}{3N}$. For (b), the denominator is $$\sum_{k=0}^N \left(\frac{N-k}{N}\right)^2\frac{1}{N+1} =\frac{1}{N^2(N+1)}\sum_{k=0}^{N}(N-k)^2.$$ Similarly you should arrive at $\frac{2N+1}{6N}$.
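Both closed forms are easy to confirm symbolically, for instance with sympy (a verification sketch only):

```python
import sympy as sp

k, N = sp.symbols('k N', positive=True, integer=True)

denom_a = sp.summation(2 * (k / N) * ((N - k) / N) / (N + 1), (k, 0, N))
denom_b = sp.summation(((N - k) / N) ** 2 / (N + 1), (k, 0, N))

print(sp.simplify(denom_a - (N - 1) / (3 * N)))      # expected: 0
print(sp.simplify(denom_b - (2 * N + 1) / (6 * N)))  # expected: 0
```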
Universal properties for kernels and cokernels
I'd like to start with your first question about "almost kernel". The following construction can be brought to the case of "almost cokernel" without much modification. Suppose $a:A\to M$ is an "almost kernel" and $k$ is the kernel map. By definition of $A$, $\varphi \circ a=0$ and there exists a map (not necessarily unique) $i:\text{Ker }\varphi \to A$ that makes the following diagram commute: $$\require{AMScd} \begin{CD} @. \text{Ker } \varphi @>i>> A @.\\ @. @| @VVaV \\ 0 @>>> \text{Ker } \varphi @>k>> M @>\varphi>> N \end{CD}$$ On the other hand, by the universal property of the kernel there exists a unique map $r:A\to \text{Ker }\varphi$ that makes the following diagram commute: $$\require{AMScd} \begin{CD} A @>r>> \text{Ker } \varphi @.\\ @| @VVkV \\ A @>a>> M @>\varphi>> N \end{CD}$$ We obtain another commutative diagram: $$\require{AMScd} \begin{CD} @. \text{Ker } \varphi @>r\circ i>> \text{Ker }\varphi @.\\ @. @| @VVkV \\ 0 @>>> \text{Ker } \varphi @>k>> M @>\varphi>> N \end{CD}$$ Now by the universal property of the kernel again $r\circ i=\text{id}_{\text{Ker}\varphi}$, since the identity also makes the above diagram commute. You can check that the existence of such $r,i$ is, in fact, necessary and sufficient for $a$ to be an almost kernel. One can take $i$ to be the inclusion; then $r$ is called a retraction of $A$ onto $\text{Ker }\varphi$. Such examples come from direct sums, since the composition of the projection from $M\oplus N$ onto $M$ and the obvious inclusion $M\to M\oplus N$ is the identity on $M$. In this case, you can take $\varphi: \mathbb{Z}\to 0$; then the kernel is $\text{id}:\mathbb{Z}\to \mathbb{Z}$ and an almost kernel is $\text{pr}_1: \mathbb{Z}\oplus \mathbb{Z}\to \mathbb{Z}, (x,y)\mapsto x$. It is easy to see that $r=\text{pr}_1$ and a choice for $i:\mathbb{Z}\to \mathbb{Z}\oplus \mathbb{Z}$ is $x\mapsto (x,0)$. Another choice is $i':x\mapsto (x,x)$. In the above argument, the use of the universal property is essential. The main purpose of the universal property is to obtain the uniqueness of the kernel (or cokernel). Suppose that there are two kernels $k:K\to M$ and $k':K'\to M$ of $\varphi:M\to N$; then a similar argument to the one we presented above shows that there exist unique maps $i:K\to K'$ and $j:K'\to K$ such that $k'=k\circ j,k=k'\circ i$ and $i\circ j=\text{id}_{K'},j\circ i=\text{id}_K$. The last two equations say that $K$ and $K'$ are isomorphic, and the previous two equations say that the isomorphism is compatible with the kernel maps $k$ and $k'$. The second advantage of the universal property, which is perhaps not so clear at the moment, is that the universal property provides morphisms. This is the case with the tensor product, and you will use this frequently when dealing with diagrams ("diagram chasing" is the name of this method). Further, the notion of kernel and cokernel is important because they are the building blocks of an abelian category. On an abelian category, we can do homological algebra, a ubiquitous computational tool of algebraists. I think what I wrote above will give you some grasp of how universality works. From your question, I think you have the same problem I had when I first learned about category theory. The definition of kernel suggests that there are many kernels, but in every application of the universal property one treats them as if there were only one kernel. From the categorical viewpoint, two objects defined by the same universal property are canonically isomorphic, so they may be identified. When there is a kernel, its universal property allows one to only care about the unique factorization.
This is quite troublesome at first, but it is indeed an advantage of category theory.
Prove that $W$ is $T$-invariant if and only if $W^0$ is $T^t$-invariant.
If $W$ is $T$-invariant, then $W^0$ is $T^t$-invariant. Let $f\in W^0$ (i.e. $f:V\rightarrow K$ such that $f(W)=0$). We want to show $T^t(f)\in W^0$. Take $w\in W$; by hypothesis $T(w)\in W$ and: $$[T^t(f)](w) = f(T(w)) = 0$$ so $T^t(f)\in W^0$. If $W^0$ is $T^t$-invariant, then $W$ is $T$-invariant. Let $w\in W$; we want to show $T(w)\in W$. Take a functional $f\in W^0$; by hypothesis $T^t(f)\in W^0$ and $$f(T(w))=[T^t(f)](w)=0$$ so $T(w)\in \ker f$ for all $f\in W^0$. If we now show that $\bigcap_{f\in W^0}\ker f = W$ we have the statement. Clearly $W\subseteq \bigcap_{f\in W^0}\ker f$. Conversely, take $x\notin W$. Choose a basis $\{w_2,\dots,w_k\}$ of $W$; since $x\notin W$, the set $\{x,w_2,\dots,w_k\}$ is linearly independent, so it extends to a basis $\mathcal B = \{x,w_2,\dots,w_k,v_{k+1},\dots,v_n\}$ of $V$. Define the functional $f:V\rightarrow K$ by $$f(x)\neq 0,\quad f(w_2)= \ \dots \ =f(w_k)=f(v_{k+1})= \ \dots \ =f(v_n)=0.$$ Then $f\in W^0$ (it vanishes on a basis of $W$) and $x\notin \ker f$. Hence $x\notin \bigcap_{f\in W^0}\ker f$ and we are done.
What number of robbers, under the model of the prisoner's dilemma, would be optimal?
Consider a set of $n$ players, each with utility function $U_{i}(\vec{e}) = a||\vec{e}|| - be_{i}$, for $a, b > 0$ and $e_{i} \in [0, 1]$. The $e_{i}$ terms represent effort. If we consider the case $a < \frac{b}{n}$, then individuals have an incentive to mooch off the system, so it is analogous to the prisoner's dilemma game, where essentially everyone turns each other in. However, if $a > \frac{b}{n}$, then $e_{i} = 1$, $\forall{i}$ becomes the dominant strategy. It's more about incentivizing here than about throwing numbers at the problem. In game theory, actors are greedy, so you want cooperation to be the greedy choice. That will force cooperation as the Nash equilibrium.
Change in eigenvalues if row and column added to highly symmetric matrix
Let $\mathbf{1}$ denote the all-ones column vector of length $n$, and $I$ and $J$ the identity and all-ones matrices of order $n$ respectively. $\newcommand{\one}{\mathbf 1}$ Theorem Let $M$ be the $(n + 1) \times (n + 1)$ matrix of the form $\begin{bmatrix}a & a\one^T\\ a\one & bJ\end{bmatrix}$, where $a$ and $b$ are distinct real numbers with $a \ne 0$. Then the eigenvalues of $M$ are: $0$ with multiplicity $n - 1$. The two roots of the equation $\lambda^2 - (a + nb)\lambda - na(a - b) = 0$, each with multiplicity $1$. Proof. Since $M$ is symmetric and has rank $2$, i.e., nullity $n - 1$, it has $0$ as an eigenvalue with multiplicity $n - 1$. Now, let $\lambda$ be a root of \begin{align} \lambda^2 - (a + nb)\lambda - na(a - b) = 0 \tag{1}\label{eq:lambda} \end{align} and define the vector $x = \begin{bmatrix}\lambda - nb \\ a \one\end{bmatrix}$ of length $n + 1$. Then \begin{align*} Mx & = \begin{bmatrix} (\lambda - nb)a + a^2 \one^T \one\\ (\lambda - nb)a \one + ab J \one \end{bmatrix}\\ &= \begin{bmatrix} \lambda a + na(a - b)\\ \lambda a \one \end{bmatrix} \end{align*} where the last step follows from $\one^T \one = n$ and $J \one = n \one$. Now observe that on rearranging \eqref{eq:lambda}, we get $\lambda(\lambda - nb) = \lambda a + na(a - b)$, which shows that $Mx = \lambda x$. Thus, $x$ is an eigenvector of $M$ corresponding to the eigenvalue $\lambda$, for each root $\lambda$ of \eqref{eq:lambda}. $\quad\square$
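The theorem is easy to sanity-check numerically; the sketch below picks the arbitrary values $n=5$, $a=2$, $b=-3$ and compares the eigenvalues of $M$ with the predicted spectrum.

```python
import numpy as np

n, a, b = 5, 2.0, -3.0
ones = np.ones((n, 1))
M = np.block([[np.array([[a]]), a * ones.T],
              [a * ones,        b * np.ones((n, n))]])

predicted = np.roots([1, -(a + n * b), -n * a * (a - b)])  # the two nonzero eigenvalues
eigs = np.linalg.eigvalsh(M)

print(np.sort(eigs))        # n-1 values near 0, plus the two predicted roots
print(np.sort(predicted))
```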
generating functions closed form
It is a Lambert series: $$ \sum_{n\geq 1}\frac{x^n}{1-x^n} = \sum_{m\geq 1}d(m)\,x^m$$ where $d(m)$ is the divisor function.
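A quick numerical check of the identity (a sketch; the truncation order is arbitrary): expand each geometric series $\frac{x^n}{1-x^n}=x^n+x^{2n}+\cdots$ up to $x^N$ and compare the coefficients with the divisor-counting function.

```python
N = 30
coeff = [0] * (N + 1)
for n in range(1, N + 1):           # x^n/(1-x^n) contributes 1 to each coefficient of x^(kn)
    for m in range(n, N + 1, n):
        coeff[m] += 1

d = [sum(1 for j in range(1, m + 1) if m % j == 0) for m in range(N + 1)]  # d(m)
print(all(coeff[m] == d[m] for m in range(1, N + 1)))   # expected: True
```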
The set of one-sided inverses is not a group under multiplication
Take the ring $A$ of all linear operators on the space of polynomials. Let $U$ be the set of all such operators with a left inverse. If $I$ and $D$ denote integration -- $I(p(x))$ is the integral with constant term $0$ -- and differentiation, respectively, then of course $DI(p(x))=p(x)$; hence $I$ is an element of $U$. Will every left inverse of $I$ be in $U$? If $D'$ is any left inverse of $I$, then for any $n \geq 1$ we have $D'(a_nx^n)=D'(I(na_nx^{n-1}))=na_nx^{n-1}=D(a_nx^n)$. Note that $D'$ is not injective: the polynomial $q(x)=1-I(D'(1))$ is nonzero (its constant term is $1$, because $I$ always produces constant term $0$), yet $D'(q)=D'(1)-D'(I(D'(1)))=D'(1)-D'(1)=0$. So $D'$ can't have a left inverse. Indeed, if $D'$ had a left inverse, then it would be a map with both a left and a right inverse (a right inverse being $I$ itself), which would imply that $D'$ is a bijection from the space of polynomials to itself, contradicting the failure of injectivity. This of course implies that $U$ is not a group, as it's not closed under inverses.
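The example can be made concrete by representing a polynomial by its list of coefficients; the sketch below (illustrative only) implements $D$ and $I$ and checks that $DI=\mathrm{id}$ while $ID\neq\mathrm{id}$.

```python
# A polynomial a_0 + a_1 x + ... + a_n x^n is stored as [a_0, a_1, ..., a_n].

def D(p):
    """Differentiation."""
    return [k * p[k] for k in range(1, len(p))] or [0]

def I(p):
    """Integration with constant term 0."""
    return [0] + [p[k] / (k + 1) for k in range(len(p))]

p = [1, 2, 3]          # 1 + 2x + 3x^2
print(D(I(p)))         # [1.0, 2.0, 3.0]  -> D after I is the identity
print(I(D(p)))         # [0, 2.0, 3.0]    -> the constant term is lost, so I after D is not
```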
Solve a system of two linear equations
$$ \left\{ \begin{array}{r} 2x+3y=1/10; \\ 3x+2y=1/8; \\ \end{array} \right. $$ Multiply the $1$st equation by $3$ and the $2$nd equation by $2$ (to get the same coefficients of $x$): $$ \left\{ \begin{array}{r} 6x+9y=3/10; \\ 6x+4y=1/4; \\ \end{array} \right. $$ Then we subtract the equations: $$(6x+9y)-(6x+4y)=3/10-1/4;$$ $$5y=6/20-5/20=1/20;$$ $$y=1/100.$$ Then substitute $y$ into the $1$st equation (you can substitute into the $2$nd equation too): $$2x+3\cdot 1/100 = 1/10;$$ $$2x=1/10-3/100 = 10/100-3/100=7/100;$$ $$x=7/200.$$ Solution: $x=7/200, y=1/100$.
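The same answer can be obtained, or at least checked, with exact rational arithmetic, e.g. with this short sketch:

```python
from fractions import Fraction as F

# Solve 2x + 3y = 1/10 and 3x + 2y = 1/8 by elimination, exactly.
a11, a12, b1 = F(2), F(3), F(1, 10)
a21, a22, b2 = F(3), F(2), F(1, 8)

y = (a21 * b1 - a11 * b2) / (a21 * a12 - a11 * a22)   # eliminate x: a21*(eq. 1) - a11*(eq. 2)
x = (b1 - a12 * y) / a11
print(x, y)   # expected: 7/200 1/100
```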
Denotation of the range of a function using its definition
I have not seen that notation used before in practice. However, I have seen similar notation used in topology to denote a chart map: if $M$ is a manifold and $U \subseteq M$, one writes $x: U \to x(U)$ for a chart map, with $x(U) \subseteq \mathbb{R}^d$. One thing to keep in mind is the order in which the information is given: the map $x : U \to x(U)$ (domain and codomain) is stated before the rule defining the map, e.g. $x(t) := t^{2}$ for all $t \in U$. With that in mind, the order in which you have presented the information about the map $f$ is a little unconventional.
A proof for $\lim_{(x,y) \to (0,0)} \frac{\sin(x^2 + y^2)}{x^2 + y^2}$
$$ \lim_{(x,y) \to (0,0)} \dfrac{\sin(x^2 + y^2)}{x^2 + y^2} = 1 \iff \left[\forall r>0\ \ \ \exists \delta > 0\ \ \ 0<d((x,y),(0,0))<\delta\implies \left| \dfrac{\sin(x^2 + y^2)}{x^2 + y^2} - 1\right|\le r\right]\\ \iff \left[\forall r>0\ \ \exists \delta > 0\ \ \ 0<x^2+y^2<\delta^2\implies \left| \dfrac{\sin(x^2 + y^2)}{x^2 + y^2} - 1\right|\le r\right]\\ \iff \left[ \forall r>0\ \ \exists \delta > 0\ \ \ 0< U<\delta^2\implies \left| \dfrac{\sin(U)}{U} - 1\right|\le r\right]\\ \iff \lim_{U \to 0^+} \dfrac{\sin(U)}{U} = 1, $$which is true because $\sin'(0) = \cos(0) = 1$. Note that the substitution $U = x^2+y^2$ captures the two-dimensional limit exactly: requiring $0<x^2+y^2<\delta^2$ forces both $x$ and $y$ to be close to zero.
Confusion about what variable to derive with respect to with L'Hopital's rule
In the usual definition of limit, the value on the right side of the arrow (as in $\to c$) is a fixed limit point of the domain of the function; this means that it is a constant. The variable stays on the left side of the arrow (as the $x$ in $x\to c$), and, applying L'Hopital's rule, we have to differentiate with respect to this variable. So the notation in your first example is really confusing. It is better to note that in the limit $$ \lim_{x\to a}\frac{f(x)-f(a)}{x-a} $$ there is a perfect symmetry between $x$ and $a$, so that we can write it as $$ \lim_{a\to x}\frac{f(a)-f(x)}{a-x} $$ and use L'Hopital's rule (differentiating with respect to $a$) to find $$ \lim_{a\to x}\frac{f(a)-f(x)}{a-x}=\frac{f'(x)}{1}=f'(x). $$ In your second example, the fact that $h$ is the variable is clear from the fact that the limit is taken as $h \to 0$.
Let $K$ a compact set. If $f:\mathbb{R}^n→\Bbb R^m$ is $C^1$ with $n<m$ then $f(K)$ has null measure.
I assume that's supposed to be $n<m$. One way to proceed is as follows. Define $\tilde f: \Bbb R^m \to \Bbb R^m$ by $\tilde f(x_1,\dots,x_m) = f(x_1,\dots,x_n)$. Now, note that $\mu(f(K)) = \mu(\tilde f(K \times [0,1]^{m-n})) \leq \mu(K \times [0,1]^{m-n}) \max_{\mathbf x \in K \times [0,1]^{m-n}} J(\tilde f)(\mathbf x)$, where $J(\tilde f)$ denotes the absolute value of the Jacobian determinant. Since $J(\tilde f)(\mathbf x) = 0$ for all $\mathbf x \in \Bbb R^m$, conclude that $\mu(f(K)) = 0$.
discrete-time to continuous-time state space
One can try to do the inverse of zero-order-hold discretization: \begin{align} \begin{bmatrix}A_d & B_d \\ 0 & I\end{bmatrix} &= e^{\begin{bmatrix}A & B \\ 0 & 0\end{bmatrix} T}, \tag{1a} \\ C_d &= C, \tag{1b} \\ D_d &= D. \tag{1c} \end{align} So one has to take the matrix logarithm of the matrix containing $A_d$ and $B_d$ from $(1a)$. However, this is not always well defined, especially in the context of state space models of physical systems, where $A$ and $B$ should only contain finite real values. For example, if $A_d$ has one eigenvalue at some negative real value, then $A$ would need an eigenvalue with a nonzero imaginary part but without an associated complex conjugate eigenvalue, so $A$ could not be real. Another example would be if $A_d$ has eigenvalues at zero, since the logarithm of zero is not well defined. However, one could also interpret those eigenvalues as a delay. Furthermore, a matrix logarithm is not unique, since $e^{M+2\,\pi\,i\,k\,I}=e^{M},\ \forall\,k\in\mathbb{Z}$. However, one could add the assumption that the imaginary parts of the eigenvalues of $A$ have the smallest possible absolute value (which can also be interpreted as saying that the discrete system is not "aliasing" the dynamics of the continuous system). Another popular discretization method is the bilinear transform, a.k.a. Tustin's method, which I believe gives a better-behaved back and forth mapping between the continuous- and discrete-time representations. Namely, for discrete to continuous one can use the following relation $$ z = \frac{2 + s\,T}{2 - s\,T}. \tag{2} $$ Substituting $(2)$ in $$ z\,x = A_d\,x + B_d\,u, \tag{3} $$ and factoring out $s$ yields $$ s \left((A_d + I)\,x + B_d\,u\right) = \frac{2}{T} \left((A_d - I)\,x + B_d\,u\right). \tag{4} $$ Now by defining a new state vector as $x' = (A_d + I)\,x + B_d\,u$ and solving for $x$ one gets $$ x = (A_d + I)^{-1} (x' - B_d\,u). \tag{5} $$ Substituting $(5)$ in $(4)$ and in $y = C_d\,x + D_d\,u$ yields \begin{align} s\,x' &= A\,x' + B\,u, \tag{6a} \\ y &= C\,x' + D\,u, \tag{6b} \end{align} with \begin{align} A &= \frac{2}{T} (A_d - I)\,(A_d + I)^{-1}, \tag{7a} \\ B &= \frac{2}{T} \left(I - (A_d - I)\,(A_d + I)^{-1}\right) B_d, \tag{7b} \\ C &= C_d\,(A_d + I)^{-1}, \tag{7c} \\ D &= D_d - C_d\,(A_d + I)^{-1} B_d, \tag{7d} \\ \end{align} which should always allow one to define a mapping from discrete to continuous, provided $-1$ is not an eigenvalue of $A_d$.
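Both routes are straightforward to code up. The sketch below (illustrative only, with arbitrary test matrices) implements the Tustin formulas $(7a)$-$(7d)$ and checks that the recovered continuous-time system has the same transfer function as the discrete one under the substitution $z = \frac{2+sT}{2-sT}$.

```python
import numpy as np

def d2c_tustin(Ad, Bd, Cd, Dd, T):
    """Discrete -> continuous state space via equations (7a)-(7d)."""
    n = Ad.shape[0]
    inv = np.linalg.inv(Ad + np.eye(n))
    A = 2.0 / T * (Ad - np.eye(n)) @ inv
    B = 2.0 / T * (np.eye(n) - (Ad - np.eye(n)) @ inv) @ Bd
    C = Cd @ inv
    D = Dd - Cd @ inv @ Bd
    return A, B, C, D

# Arbitrary discrete-time test system (no eigenvalue of Ad at -1)
Ad = np.array([[0.5, 0.1], [0.0, 0.8]])
Bd = np.array([[0.2], [1.0]])
Cd = np.array([[1.0, -0.5]])
Dd = np.array([[0.3]])
T = 0.1

A, B, C, D = d2c_tustin(Ad, Bd, Cd, Dd, T)

def H_d(z):
    return Cd @ np.linalg.inv(z * np.eye(2) - Ad) @ Bd + Dd

def H_c(s):
    return C @ np.linalg.inv(s * np.eye(2) - A) @ B + D

# Transfer functions should agree under z = (2 + s*T)/(2 - s*T)
for z in [1.5, 2.0 + 1.0j, -0.3 + 0.7j]:
    s = (2.0 / T) * (z - 1) / (z + 1)
    print(np.allclose(H_d(z), H_c(s)))   # expected: True each time
```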
The continuity of a distance function
For all $a \in A$ we have $f(y) \le d(y,a) \le d(x,y)+d(x,a)$, taking the infimum over $a$ gives $f(y) \le d(x,y) + f(x)$. The same argument applies, mutatis mutandis, with $x,y$ interchanged. Hence we have $|f(x)-f(y)| \le d(x,y)$. Now suppose $U$ is open, and let $V = f^{-1}(U)$. Let $v \in V$, we need to show that there is some open ball containing $v$ contained in $V$. We have $f(v) \in U$, hence since $U$ is open we have some $\epsilon>0$ such that $f(v) \in (f(v)-\epsilon, f(v)+\epsilon) \subset U$. Now suppose $d(w,v) < \epsilon$, then the above shows that $|f(w)-f(v)| \le d(w,v) < \epsilon$, and so $f(w) \in U$. Hence $f(B(v,\epsilon)) \subset (f(v)-\epsilon, f(v)+\epsilon) \subset U$. Hence $V$ is open.
The metric $d_{\infty}(x,y) =\max_{i = 1,\ldots,n} |x_i-y_i|$
Another way of writing $\max_{i = 1,\ldots,n} |x_i-y_i|$ is $\max \{|x_i - y_i| \;|\; i = 1, \dots, n\}$, that is, we consider the $n$ numbers $|x_i - y_i|$ ($1 \leq i \leq n$) and take the maximal one of these.
The complete solution set of $[\sin^{-1}x]>[\cos^{-1}x]$ is
This is correct, and can be solved without graphing. Assume $1>x>0$: $$\arcsin x>\arccos x$$ Both sides lie in $(0,\pi/2)$, and $\cos$ is decreasing on that interval, so applying $\cos$ reverses the direction of the inequality: $$\cos(\arcsin x)<x$$ $$\sqrt{1-x^2}<x$$ $$1<2x^2$$ $$x>\frac{1}{\sqrt{2}}$$
Arithmetic progression within sequence: 1/2, 1/3, 1/4, 1/5, ......
Hint: The arithmetic progressions you found written in ascending order are: $\dfrac{1}{6}, \dfrac{2}{6}, \dfrac{3}{6}$ $\dfrac{1}{12}, \dfrac{2}{12}, \dfrac{3}{12}, \dfrac{4}{12}$ $\dfrac{1}{60}, \dfrac{2}{60}, \dfrac{3}{60}, \dfrac{4}{60}, \dfrac{5}{60}$ See if you can generalize this. Spoiler below. For any integer $n$, let $L_n = \text{lcm}(1,2,\ldots,n)$. Then, the numbers $\dfrac{1}{L_n}, \dfrac{2}{L_n}, \ldots, \dfrac{n}{L_n}$ are an arithmetic progression and are all in the sequence, since $\dfrac{L_n}{k}$ is a positive integer for each $k=1,\ldots,n$, so that $\dfrac{k}{L_n} = \dfrac{1}{L_n/k}$. You could also use $\dfrac{1}{n!}, \dfrac{2}{n!}, \ldots, \dfrac{n}{n!}$.
$\ker(f\otimes g)=\langle\{a\otimes b:(a\in \ker f \text{ or }(b\in\ker g)\}\rangle$
Let's take a closer look: when you write $(f\otimes g)(t)=(\widetilde{f}\otimes g)(t)$ you are assuming that the LHS and RHS lie in the same module, i.e. that the natural map $\operatorname{Im}f\otimes B\to A\otimes B$ is a monomorphism. Of course the map $\operatorname{Im}f\to A$ is a monomorphism, but its tensor product with $B$ needn't be.
suggestion for video lectures on algebraic topology
I think it is a bit late :-), but if you are still interested in this topic I would definitely recommend this video course: https://www.youtube.com/watch?v=XxFGokyYo6g&list=PLpRLWqLFLVTCL15U6N3o35g4uhMSBVA2b
G is an abelian group. Prove $G^{(n)}$ is a subgroup of G
To show that $H$ is a subgroup of $G$, you only need to show that $1_G \in H$, If $g, h \in H$, then $gh \in H$, and If $g \in H$, then $g^{-1} \in H$. Some people prefer to combine 2 and 3 together and just prove If $g, h \in H$, then $gh^{-1} \in H$. You can use whichever way you find easier. I'll prove all 3 things for the case $H = G^{(n)}$. $1_G \in G^{(n)}$ because $1_G^n = 1_G$. If $g, h \in G^{(n)}$, then $(gh)^n = g^n h^n = 1_G 1_G = 1_G$, so $gh \in G^{(n)}$. If $g \in G^{(n)}$, then $(g^{-1})^n = (g^n)^{-1} = 1_G^{-1} = 1_G$, so $g^{-1} \in G^{(n)}$. Therefore, $G^{(n)}$ is a subgroup of $G$. Note that we used the fact that $G$ is abelian in the proof of 2.
Definition of upper integral
Yes. It's certainly true if $\int f\,d\mu=\infty$; assume then that $\int f\,d\mu<\infty$. Just to simplify the notation we can assume that $f>0$ (consider $X'\subset X$). And since $f$ is integrable the set where $f=\infty$ has measure zero, so we can also assume $f<\infty$. For $\lambda>1$ and $n\in\Bbb Z$ let $$E(\lambda,n)=\{x\,:\,\lambda^n\le f(x)<\lambda^{n+1}\}.$$ Let $$g_\lambda=\sum_{n\in\Bbb Z}\lambda^{n+1}\chi_{E(\lambda,n)}.$$Then $g_\lambda$ is elementary and $g_\lambda>f$. In fact we have $$f<g_\lambda\le\lambda f,$$so $g_\lambda\to f$ pointwise as $\lambda$ decreases to $1$. And for $1<\lambda\le 2$ we have $g_\lambda\le\lambda f\le 2f$. Since $2f$ is integrable, dominated convergence shows that $\int g_\lambda\,d\mu\to\int f\,d\mu$ as $\lambda$ decreases to $1$.
Show that every open set A in a metric space (X, d) is the union of closed sets.
Your proof is perfectly fine. However, here's a proof that every open set is a union of at most countably many closed sets: Let $U$ be an open set. First, let's consider the case $\partial U \ne \emptyset$. Then for any $n\in\mathbb N$ we define $D_n$ to be the set of points whose distance to $\partial U$ is less than $1/n$. Note that $D_n$ is an open set (it's the union of the open balls with radius $1/n$ around all points of $\partial U$). Next we define $C_n = U\setminus D_n$. Now $C_n$ is a closed set, because its complement is open: it's the union of the open set $D_n$ and the open exterior of $U$ (the boundary $\partial U$ itself is contained in $D_n$). By definition, $C_n\subset U$. Now $\bigcup_{n\in \mathbb N} C_n = U$. This is because for any $x\in U$, $d(x,\partial U)>0$ (because $U$ is open), therefore there exists an $n\in\mathbb N$ such that $d(x,\partial U)>1/n$, and thus $x\notin D_n$. Therefore $x\in U\setminus D_n = C_n$. Thus the $C_n$ form a family of closed sets whose union is $U$. There remains the case $\partial U = \emptyset$. But in that case $\partial U\subseteq U$ trivially, thus $U$ is closed, and thus trivially the union of closed sets (namely the union of itself).
Fourier Analysis Help - $\mathcal L^2$
For 1, write $\mathbb{Z}$ as the union of two disjoint infinite subsets, $\mathbb{Z} = A \cup B$ and look at the subspaces spanned by $\{ e_k \}_{k\in A}$ and $\{ e_k \}_{k\in B}$. For 2, think some more about 1.
Isomorphism between Hilbert spaces
Since $E$ has Lebesgue measure $0$, the restriction $\rho \colon L^2(\Omega,\mathcal{O}) \to L^2(\widetilde{\Omega},\mathcal{O})$ is an isometry. So it remains to see that it is surjective, hence that for every $f\in L^2(\widetilde{\Omega},\mathcal{O})$, every point $a\in E$ is a removable singularity of $f$. Since $E$ is finite, there is an $R > 0$ such that $D_{2R}(a) \subset \Omega$ and $\lvert a-b\rvert > R$ for all $b \in E\setminus\{a\}$. To simplify the notation, we may assume that $a = 0$. For every $0 < r < R$, consider the annulus $A_r = \{ z : r < \lvert z\rvert < R\}$ and the restriction $\rho_r \colon L^2(\widetilde{\Omega},\mathcal{O}) \to L^2(A_r,\mathcal{O})$. As a restriction, $\rho_r$ is norm-decreasing, i.e. $$\lVert \rho_r(f)\rVert_{L^2(A_r)} \leqslant \lVert f\rVert_{L^2(\widetilde{\Omega})}$$ for all $f\in L^2(\widetilde{\Omega},\mathcal{O})$. On the annulus $A_r \subset \widetilde{\Omega}$, every $f\in L^2(\widetilde{\Omega},\mathcal{O}) \subset \mathcal{O}(\widetilde{\Omega})$ has a Laurent expansion $$f(z) = \sum_{n=-\infty}^\infty c_n z^n.$$ By the choice of $R$, the Laurent series converges in a punctured disk $\{ 0 < \lvert z\rvert < R+\varepsilon\}$, hence it converges uniformly on $A_r$. (Note: locally uniform convergence would suffice, but uniform convergence simplifies a few steps.) Now one needs to check that the monomials $(z^k)_{k\in\mathbb{Z}}$ are mutually orthogonal in $L^2(A_r,\mathcal{O})$. Then, we see that $$\lVert \rho_r(f)\rVert_{L^2(A_r)}^2 = \int_{A_r} \lvert f(z)\rvert^2 \,d\lambda = \sum_{n=-\infty}^\infty \lvert c_n\rvert^2 \int_{A_r} \lvert z\rvert^{2n}\,d\lambda \leqslant \int_{\widetilde{\Omega}} \lvert f(z)\rvert^2\,d\lambda.$$ Letting $r \to 0$, the only way for the left hand side to remain finite is that $c_n = 0$ for all $n < 0$, which means that the singularity of $f$ in $0$ is removable. That holds for every singularity $e\in E$, hence $\rho$ is surjective.
Find the GS to the following InHomo System
Did you try to solve the given equations? $$-2G=AG$$ $$(A+2I)G=0$$ $$\left(\pmatrix {1 & 3 \\ 1&-1}+ \pmatrix {2&0 \\0&2} \right)G=0$$ $$\pmatrix {3 & 3 \\ 1&1} \pmatrix {g_1 \\g_2}=0$$ $$\implies g_1=-g_2$$ $$\implies G=k\pmatrix {1 \\-1}$$ Plug this in the first equation: $$-2E+G=AE+ \pmatrix {1 \\0}$$ $$(A+2I)E=G- \pmatrix {1 \\0}$$ $$(A+2I)E= \pmatrix {k-1 \\ -k}$$ $$\pmatrix {3 & 3 \\ 1&1} \pmatrix {e_1 \\e_2}= \pmatrix {k-1 \\ -k}$$ For this system to be consistent we need $$-3k=k-1 \implies k =\dfrac 14,$$ and then $$e_1+e_2 =-\dfrac 14.$$ If you choose $e_2=0$ then $e_1=-\dfrac 14$ and $E=-\dfrac 14 \pmatrix {1 \\0}$.
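A quick numerical check of the result (with $A$, $E$, $G$ as above):

```python
import numpy as np

A = np.array([[1.0, 3.0], [1.0, -1.0]])
G = 0.25 * np.array([1.0, -1.0])
E = -0.25 * np.array([1.0, 0.0])

print(np.allclose(-2 * G, A @ G))                              # second equation: -2G = AG
print(np.allclose(-2 * E + G, A @ E + np.array([1.0, 0.0])))   # first equation
```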
Proving that for a quadratic residue $n \pmod p$, there exists an $a$ such that $a^2-n$ is a quadratic nonresidue $\pmod p$.
Suppose not. Then the map $f:x \mapsto x-n$ sends the squares to the squares and therefore the nonsquares to the nonsquares. This property stays true if we iterate the map. Take any point $a$ and consider iterating $f$ on $a$, i.e., $f^k(a)=a-kn$. This goes through all the points of $\mathbb{F}_p$ (show that for every $b\in\mathbb{F}_p$, $f^{\frac{a-b}{n}}(a) = b$). Therefore all elements of $\mathbb{F}_p$ have the same quadratic residuosity as $a$, which is absurd. This does not prove the harder statement saying that the probability that $a^2-n$ is a non-residue over random $a$'s is 1/2. For that you can find the original proof here, or an English version here.
What is the value of $\tan (\theta) = 1/-1$?
Let $z=-1+i=x+yi,$ with $x=-1$ and $y=1$. Clearly $z$ is in the second quadrant, since $x<0$ and $y>0$. The polar form is $z=r(\cos\theta+i\sin\theta)$. As you noted, $r=\sqrt2$. Therefore, $\cos\theta=-\dfrac1{\sqrt2}$ and $\sin\theta=\dfrac1{\sqrt2}$, so we can take $\theta=\dfrac{3\pi}4$. Note that $\dfrac{\pi}2<\theta<\pi$, so $\theta$ is indeed in the second quadrant.
Contraction of extension of prime ideals in an integral ring extension
Thanks to user26857's helpful comment above, I think that I'm able to provide an answer. By the lying-over property, there is a prime ideal $q$ of $S$ such that $q^c=p$. Then: $$p^{ec} = q^{cec}=q^c=p.$$ This can fail if $S$ is not integral over $R$: for instance $p=2\Bbb Z \triangleleft R=\Bbb Z \subset S=\Bbb Q$, where $p^e=\Bbb Q$ and hence $p^{ec}=\Bbb Z\neq p$.
Positive Semidefinite Matrix
Yes, this is called the Schur product theorem; see, for instance, the Wikipedia article on the Schur product theorem.
What is the opposite of ^?
Looks like your level is $$\sqrt[3]{\frac{XP}{10}}.$$ If you have trouble computing cube roots, try $$\exp\left(\frac13\cdot\ln\frac {XP}{10}\right).$$
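In code this is a one-liner; for example, in Python (just an illustration of the formula above, under the reading $XP = 10\cdot\text{level}^3$):

```python
def level_from_xp(xp):
    """Invert XP = 10 * level**3, i.e. level = cube root of XP / 10."""
    return (xp / 10) ** (1 / 3)

print(level_from_xp(10))   # 1.0
print(level_from_xp(80))   # approximately 2.0, since 10 * 2**3 = 80
```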