how to convert surface to vector field to use it in line integral
The unit circle centered at the origin is given by the parametric equations $x= \cos(t), y= \sin(t)$ with $t$ going from $0$ to $2\pi$, so $dx= -\sin(t)dt$ and $dy= \cos(t)dt$. Since $ds^2= (dx)^2+ (dy)^2$, we get $ds= \sqrt{(dx)^2+ (dy)^2}= \sqrt{(-\sin(t)dt)^2+ (\cos(t)dt)^2}= \sqrt{\sin^2(t)+ \cos^2(t)}\,dt= dt$. (More simply, the angle $t$, measured in radians, measures arc length along the unit circle, so $ds= dt$.) Hence $\int xy\, ds= \int_0^{2\pi} \cos(t)\sin(t)\,dt$. There is no $\int dx+ dy$ in this problem.
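As a quick numerical sanity check of this parametrization (a SymPy sketch, not part of the original argument):

    import sympy as sp

    t = sp.symbols('t')
    x, y = sp.cos(t), sp.sin(t)
    ds = sp.simplify(sp.sqrt(sp.diff(x, t)**2 + sp.diff(y, t)**2))  # simplifies to 1
    print(sp.integrate(x*y*ds, (t, 0, 2*sp.pi)))                    # 0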
Minimum Volume Enclosing Ellipsoid and convex function
No, certainly not. Write $f = f_1 + f_2$, where $f_1 = \delta_L : \mathbb R^n \to \{0,\infty\}$ is the indicator function of $L$ and $f_2 : \mathbb R^n \to \mathbb R$ is an arbitrary convex function. Then the minimum value of $f$ is the minimal value of $f_2$ on $L$, and the minimizer can be any point of $L$ (depending on the choice of $f_2$).
Confusion about calculating cdf and pdf of random variable
I didn't check your arithmetic, but I got different results, which are consistent: $P(Y\le y)=\frac{3y}{2}-\frac{5y^2}{8}$ for $0\le y\le 1$, and $P(Y\le y)=\frac{y}{2}-\frac{y^2}{8}+\frac{1}{2}$ for $1\le y\le 2$. Note that the cdf is continuous at $y=1$, where $P=\frac{7}{8}$, so differentiating to get the density is no problem. Derivation: for $0\le y\le 1$, $P(Y\le y)=\int_{-y}^0(1-|x|)dx+\int_0^{\frac{y}{2}}(1-|x|)dx$; for $1\le y\le 2$, $P(Y\le y)=\int_{\frac{1}{2}}^{\frac{y}{2}}(1-|x|)dx+\frac{7}{8}$.
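A quick symbolic check (a SymPy sketch; it assumes, as the integrals above suggest, the triangular density $f(x)=1-|x|$ on $[-1,1]$):

    import sympy as sp

    x, y = sp.symbols('x y', positive=True)
    # on [-1, 0] the density 1 - |x| is 1 + x, on [0, 1] it is 1 - x
    cdf_low  = sp.integrate(1 + x, (x, -y, 0)) + sp.integrate(1 - x, (x, 0, y/2))
    cdf_high = sp.Rational(7, 8) + sp.integrate(1 - x, (x, sp.S(1)/2, y/2))
    print(sp.expand(cdf_low))   # 3*y/2 - 5*y**2/8
    print(sp.expand(cdf_high))  # y/2 - y**2/8 + 1/2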
Can Eigen vector be a zero vector?
The eigenspace contains the zero vector, but the zero vector is often not considered to be an eigenvector (e.g. because if it were, then every linear map would have all elements of the base field as eigenvalues, since $\lambda\cdot0=0$ for all $\lambda$...)
How to transform a general polynomial function into a multilinear function?
You have misunderstood the question posed in the image; the version in your title is ill-posed. I think it is an application of a general result in Fourier theory: in a Hilbert space of functions, every function can be written uniquely as a Fourier expansion with respect to a given orthonormal basis. (Note that such a basis exists in every Hilbert space if you assume the Axiom of Choice and apply the Gram-Schmidt process.)
Does $\alpha$ need to be transcendental over F?
Actually, they use that hypothesis in the step where a polynomial in $\alpha$ with coefficients that are polynomials in $\beta$ is formally rewritten as a polynomial in $\beta$ with coefficients that are polynomials in $\alpha$: the hypothesis guarantees that this new polynomial (seen in $(F[\alpha])[x]$) is not the zero polynomial. This is necessary to conclude that $\beta$ satisfies a non-zero polynomial over $F(\alpha)$, so $\beta$ is algebraic over $F(\alpha)$.
In how many ways can 3 employees visit 40 locations
For each location you have $3$ choices (employees), and you have $40$ locations, so the answer is $3^{40}$.
$\gcd(m^2,n^2)$ = $(\gcd(m,n))^2$
Another way: let $\dfrac nN=\dfrac mM=d$ where $d=(m,n)$, so that $(M,N)=1$. In that case $M^2,N^2$ cannot share a common factor $>1$, which implies $(M^2,N^2)=1$ and hence $(m^2,n^2)=d^2(M^2,N^2)=d^2$. Method $\#2$: let the highest exponents of a prime $p$ that divide $m,n$ be $a,b$ respectively. Then the highest exponent of $p$ in $(m,n)$ is $\min(a,b)=c$ (say), while the highest exponent of $p$ in $(m^2,n^2)$ is $\min(2a,2b)=2c$. This holds for every prime that divides $mn$.
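A brute-force confirmation of the identity (a small Python sketch):

    from math import gcd

    assert all(gcd(m*m, n*n) == gcd(m, n)**2
               for m in range(1, 200) for n in range(1, 200))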
Let $X$ be a topological space and $U$ be a proper dense open subset of $X$. Pick the correct statement from the following
For (a), your counterexample works. For (b), $X=[0,1]$ with $U=(0,1)$ works as a counterexample. For (c), $X=[0,1)$ with $U=(0,1)$ works again. For (d), the statement is correct as $X-U$ is a closed subset of $X$, hence compact.
Given a set of integers find two disjoint subsets $I$ and $J$ so $|I|+|J|=k$ and $\sum\limits_{i \in I}x_i = \sum\limits_{j \in J}x_j = n^2$
Here is a naive algorithm. First check the necessary condition $\sum x_i\geq 2n^2$ (assuming the $x_i$ are non-negative; otherwise skip this pruning step), then run a dynamic program over partial sums:

    def exists_disjoint_subsets(xs, k, n):
        target = n * n
        if sum(xs) < 2 * target:              # necessary when all x_i >= 0
            return False
        # states: (sum over I, sum over J, |I| + |J|) built so far
        possible_sets = [(0, 0, 0)]
        for x in xs:
            for s_i, s_j, nb in list(possible_sets):
                if nb < k:
                    if s_i + x <= target:
                        if nb + 1 == k and s_i + x == target and s_j == target:
                            return True
                        possible_sets.append((s_i + x, s_j, nb + 1))
                    if s_j + x <= target:
                        if nb + 1 == k and s_i == target and s_j + x == target:
                            return True
                        possible_sets.append((s_i, s_j + x, nb + 1))
        return False

This can be optimized using the symmetry between $I$ and $J$ (e.g. by only keeping states with $s_i\ge s_j$), and by storing the states in a set rather than a list.
How to prove that the derivative of a Lipschitz function is always $\leq$ L
From $ |\frac{f(x)-f(c)}{x-c}|\leq L$ we get with $x \to c$ that $|f'(c)| \le L$, hence $$ f'(c) \le |f'(c)| \le L.$$
A Problem Based on Ellipse
All ellipses are affine images of the unit circle. Moreover, intersections, tangents and parallel lines are all affine properties, and the affine image of an ellipse is another ellipse, so w.l.o.g. we need only prove this for the unit circle. If the given pair of lines are parallel, then the third side of the triangle collapses to a point and $E_2=E_1$, so we assume that the given lines aren’t parallel. Rotate the coordinate system so that the acute bisector of the lines is parallel to the $x$-axis and let $\theta$ be the acute angle between the lines. The two parallel lines through a point $A=(\cos t,\sin t)$ on the circle are $$x \sin{\frac\theta2} \pm y \cos{\frac\theta2} = \pm \sin\left(t-\frac\theta2\right).$$ The other two vertices of the triangle can be found by reflecting $A$ in the diameters perpendicular to these two lines: $$B = (-\cos(t-\theta), \sin(t-\theta)) \\ C = (-\cos(t+\theta),\sin(t+\theta)).$$ The equation of the line through $B$ and $C$, after a bit of trigonometric manipulation, is $$2x\cos t\sin\theta - 2y\sin t\sin\theta + \sin{2\theta} = 0,$$ which can be further simplified to $$x \cos t - y \sin t + \cos\theta = 0$$ by dividing by $2\sin\theta$. These lines are at a constant distance of $\cos\theta$ from the origin, so are all tangent to the circle $x^2+y^2=\cos^2\theta$.
$iV$ is open for $V$ open in real TVS
Your result is false: multiplication by $i$ may not be continuous. Let $E_1$ and $E_2$ be two real topological vector spaces, with the same underlying set but two different topologies. Let $E = E_1 \times E_2$ with the natural structure of a real topological vector space and with the complex structure $$i : E_1 \times E_2 \longrightarrow E_1 \times E_2, \quad (x_1,x_2) \longmapsto (-x_2,x_1).$$ Then $E$ is not a topological complex vector space.
Proving that $\lim\limits_{x\to 4}\frac{2x-5}{6-3x} = -\frac{1}{2} $
Given any $\varepsilon > 0$, let $\delta = \min\{1, \varepsilon/M\} > 0$, where $M$ is some fixed magical positive constant. While doing this proof, we will eventually figure out what $M$ should be, then go back and erase $M$ and replace it with this constant. Now if $0 < |x - 4| < \delta$, then observe that: \begin{align*} \left| \frac{2x - 5}{6 - 3x} - \frac{-1}{2} \right| &= \left| \frac{4 - x}{6(x - 2)} \right| \\ &= \frac{1}{6|x - 2|}|x - 4| \\ &< \frac{1}{6|x - 2|} \cdot \frac{\varepsilon}{M} &\text{since } |x - 4| < \delta \leq \frac{\varepsilon}{M} \\ \end{align*} Now we take a break from our proof and try to figure out a way to bound $\frac{1}{6|x - 2|}$ by a constant $M$ given that $x$ is at most $1$ away from $4$, implying that $|x - 2|$ won't be too close to zero. Indeed, if $|x - 4| < 1$, then observe that: \begin{align*} |x - 2| &= |(x - 4) - (-2)| \\ &\geq ||x - 4| - \left| -2 \right|| &\text{by the reverse triangle inequality} \\ &= 2 - |x - 4| &\text{since } |x - 4| < 1 \leq 2 \\ &> 2 - 1 &\text{since } |x - 4| < 1 \\ &= 1 \end{align*} so that: $$ \frac{1}{6|x - 2|} < \frac{1}{6} $$ Thus, by taking $M = \frac{1}{6}$, we can finish up our original proof: Given any $\varepsilon > 0$, let $\delta = \min\{1, 6\varepsilon\} > 0$. Then if $0 < |x - 4| < \delta$, observe that: \begin{align*} \left| \frac{2x - 5}{6 - 3x} - \frac{-1}{2} \right| &= \left| \frac{4 - x}{6(x - 2)} \right| \\ &= \frac{1}{6|x - 2|}|x - 4| \\ &< \frac{1}{6|x - 2|} \cdot 6\varepsilon &\text{since } |x - 4| < \delta \leq 6\varepsilon \\ &< \frac{1}{6} \cdot 6\varepsilon &\text{since } |x - 4| < \delta \leq 1 \\ &= \varepsilon \end{align*} as desired. $~\blacksquare$
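For what it's worth, here is a quick numerical sanity check of the final choice $\delta = \min\{1, 6\varepsilon\}$ (a Python sketch, not part of the proof):

    import random

    f = lambda x: (2*x - 5) / (6 - 3*x)
    for _ in range(100_000):
        eps = random.uniform(1e-6, 10)
        delta = min(1, 6*eps)
        x = 4 + random.uniform(-delta, delta)
        if x != 4:
            assert abs(f(x) - (-0.5)) < eps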
Is this "snake lemma" true in derived category?
In general, neither (a) nor (b) is true. I'll give three examples to illustrate various points. In all of these, $k$ will be a field, which I'll take to have characteristic two (just so that I don't need to be careful with signs; it's not really important!). (1) In general, your diagram doesn't even have to induce a morphism of triangles in the derived category, as the "connecting" square $$\require{AMScd} \begin{CD} Z_1 @>>> X_1[1]\\ @VVV @VVV\\ Z_2 @>>> X_2[1] \end{CD}$$ may not commute. If you don't insist that it does commute, then there are very easy counterexamples to (a). For example, take $Y$ to be a contractible complex ($\dots\to0\to k\stackrel{1}{\to}k\to0\to\dots$, say) and $X$ a subcomplex such that $Y/X$ is not acyclic ($\dots\to0\to 0\to k\to0\to\dots$, say). Then $$\begin{CD} 0 @>>> X @>>> Y @>>> Y/X @>>>0\\ @.@VV1V @VV1V @VV0V @.\\ 0 @>>> X @>>> Y @>>> Y/X @>>>0 \end{CD}$$ commutes up to homotopy, but the cone of the middle vertical arrow is acyclic, whereas the cone of the third arrow is not. However, even if you insist that the connecting square commutes, (a) and (b) can still be false. (2) Fairly simple counterexamples to (b) can be constructed by taking a map from a triangle to a rotation of itself, as in: $$\begin{CD} X @>>> Y @>>> Z @>\gamma >> X[1]\\ @VV0V @VV0V @VV\gamma V @VV0V\\ Y @>>> Z @>>> X[1] @>>> Y[1] \end{CD}$$ The cones are $Y\oplus X[1]$, $Z\oplus Y[1]$ and $Y[1]$, and there are fairly easy examples where there is no triangle $$Y\oplus X[1]\to Z\oplus Y[1]\to Y[1]\to Y[1]\oplus X[2].$$ For example, let $R=\mathbb{Z}[t]$, and $I$ the ideal $(2,t)$, and start with the obvious triangle $$I\to R\to R/I\to I[1]$$ in the derived category of $R$-modules. Then there can't be a triangle $$R\oplus I[1]\to R/I\oplus R[1]\to R[1]\to R[1]\oplus I[2],$$ as its cohomology exact sequence would be $$\dots\to 0\to I\to R\to R\to R\to R/I\to0\to\dots,$$ and no such exact sequence exists, as the kernel of $R\to R/I$ would have to be $I$, and there is no surjective map $R\to I$. (3) Let $S=k[x,t]/(xt^2)$, and in the derived category of $S$-modules form the "homotopy Cartesian square" $$\begin{CD} S @>xt >> S\\ @VxVV @V\alpha VV\\ S @>\beta >> Y \end{CD}$$ (i.e., $Y$ is the complex $\dots\to0\to S\stackrel{ \begin{pmatrix}xt\\x\end{pmatrix}}{\to} S\oplus S\to0\to\dots$, with $S\oplus S$ in degree zero, and $\alpha$ and $\beta$ are inclusions of the first and second summands). Completing to a morphism of triangles gives $$\begin{CD} S @>xt >> S @>>> Z @>>>S[1]\\ @VxVV @V\alpha VV @V1VV @VxVV\\ S @>\beta >> Y @>>> Z @>>> S[1] \end{CD}$$ where the cones of the first two vertical maps are both quasiisomorphic to $\dots\to0\to S\stackrel{x}{\to}S\to0\to\dots$, so this is not yet a counterexample to (a). However, we can replace the first vertical arrow by multiplication by $x+xt$: $$\begin{CD} S @>xt >> S @>>> Z @>>>S[1]\\ @Vx+xt VV @V\alpha VV @V1VV @Vx+xt VV\\ S @>\beta >> Y @>>> Z @>>> S[1] \end{CD}$$ and this is still a morphism of triangles, but now the cone of the first vertical map is $\dots\to0\to S\stackrel{x+xt}{\to}S\to0\to\dots$, whereas the cone of the second vertical map is still $\dots\to0\to S\stackrel{x}{\to}S\to0\to\dots$, and it can be checked that there is no quasiisomorphism between these.
Direct sum counterexample
Hint: Consider four distinct lines (passing through $(0,0)$) in $\mathbb R^2$.
Finding if the improper integral $\int_1^{\infty} \frac{3\arctan(x)\,dx}{ \sqrt{x^4+1}}$ converges or diverges.
Notice that $$\left\vert \frac{3\arctan x}{\sqrt{x^4+1}}\right\vert\le \frac{3\pi}2\frac1{x^2}$$ and the integral $$\int_1^\infty \frac{dx}{x^2}$$ exists, which allows you to conclude by the comparison test.
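A quick numerical check (a SciPy sketch) that the integral is indeed finite and below the bound $\int_1^\infty \frac{3\pi}{2x^2}dx=\frac{3\pi}{2}\approx 4.71$:

    import numpy as np
    from scipy.integrate import quad

    val, err = quad(lambda x: 3*np.arctan(x)/np.sqrt(x**4 + 1), 1, np.inf)
    print(val)  # roughly 3.2, finite and well below 3*pi/2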
prime decomposition in galois extensions
Yes. I suppose that the definition of a prime ideal $\mathcal{P}$ being "above $p$" you are using is $\mathcal{P} \cap \mathbb{Z} = (p)$. From this it is clear that if $\mathcal{P}$ lies above $p$, then it contains $p\mathbb{Z}$, and -- since it is an ideal -- hence also $p \mathcal{O}_K$. But recall that in a Dedekind domain "to contain is to divide", hence $\mathcal{P}$ divides $p\mathcal{O}_K$ and appears in the (unique!) prime factorization of $p\mathcal{O}_K$. Note that this does not require that $K/\mathbb{Q}$ be Galois (as appears in the title but not the body of the question), and it has nothing to do with algebraic number fields per se: in general, for a finite extension of Dedekind domains $S/R$, the prime ideals of $S$ lying over a given prime $\mathfrak{p}$ of $R$ are exactly the ones appearing in the factorization of $\mathfrak{p}S$.
Involutions in $\mathbb{Q}$.
Show $\overline{1} = 1$. Then show $\overline{k} = k$ for $k \in \mathbb{Z}$ using ii), and finally $\overline{p/q} = p/q$ with iii).
How do I get the combination sequence formula?
The following is a simple way of solving the problem. The recurrence has the shape $$y_{n+1}=4y_n-1$$ It would be nice if the recurrence looked like $$u_{n+1}=4u_n$$ Then life would be easy. Unfortunately, we have that pesky $-1$ that spoils things! So let's try for the next best thing. Could we transform our recurrence to something of the shape $$y_{n+1}-a=4(y_n-a)$$ where $a$ is a constant? Do a bit of arithmetic. We get $y_{n+1}=4y_n-3a$. If we choose $a=1/3$, we can get our recurrence into the desired shape. Thus we can rewrite our recurrence as $$y_{n+1} -\frac{1}{3}=4\left(y_n -\frac{1}{3}\right)$$ Temporarily, let $$u_n=y_n-\frac{1}{3}$$ for all $n$. Then $$u_{n+1}=4u_n$$ Now we have to decide whether we start our indices at $0$ or at $1$. Like many mathematicians, I prefer to start at $0$. (Starting at $1$ would not change things very much.) We have $y_0=1$, so $u_0=1-1/3=2/3$. And because to get the "next" $u$ we multiply the previous $u$ by $4$, we have $$u_n=\left(\frac{2}{3}\right)4^n$$ But $y_n=u_n+1/3$. It follows that $$y_n=\left(\frac{2}{3}\right)4^n +\frac{1}{3}$$ Comment: Essentially the same method works for the recurrence $y_{n+1}=cy_n+d$ where $c$ and $d$ are constants, and the idea can be adapted to deal with many other situations.
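A two-line check of the closed form against the recurrence (a Python sketch, with $y_0=1$ as above; note $\frac{2}{3}4^n+\frac13=\frac{2\cdot 4^n+1}{3}$ is always an integer):

    y = 1
    for n in range(20):
        assert y == (2 * 4**n + 1) // 3   # y_n = (2/3)*4^n + 1/3
        y = 4 * y - 1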
measure theory problems and step functions
One question per post. For $0\le x\le y\le1$ $$ |f_n(x)-f_n(y)|=\Bigl|\int_x^yf_n'(t)\,dt\Bigr|\le\|f'_n\|_1. $$ Thus $\{f_n\}$ is equicontinuous and has a uniformly convergent subsequence. Since $\{f_n\}$ converges to $0$ in $L^1$, the uniform limit of the aforementioned subsequence must be $0$. This argument can be carried out for any subsequence of $\{f_n\}$, that is, we can prove that any subsequence of $\{f_n\}$ has a subsequence that converges uniformly to $0$. This implies that the whole sequence $\{f_n\}$ converges uniformly to $0$.
integrable function in Lebesgue sense
Use Taylor series to see that $e^x-1 \geq x$ and $e^x-1 \geq x^3/6$, for $x \geq 0$. So $$0 \leq \frac{x}{e^x-1} \leq \frac{x}{\max \{ x,x^3/6 \}} = \min \left \{ 1,\frac{6}{x^2} \right \}.$$ Can you work from here? Another approach, which actually gives a way to calculate the integral: $$\frac{x}{e^x-1}=\frac{x e^{-x}}{1-e^{-x}}=x e^{-x} \sum_{n=0}^\infty e^{-nx}.$$ Hence $$\int_0^\infty \frac{x}{e^x-1} dx = \int_0^\infty x e^{-x} \sum_{n=0}^\infty e^{-nx} dx.$$ Then you can interchange summation and integration using monotone convergence. This leaves you to compute $$\sum_{n=0}^\infty \int_0^\infty x e^{-(n+1)x} dx.$$ Using integration by parts, you get $$\sum_{n=1}^\infty \frac{1}{n^2}.$$ One shows that this series converges in calc II; using Fourier series methods one can show that the value is $\frac{\pi^2}{6}$.
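A numerical confirmation (a SciPy sketch) that the integral equals $\frac{\pi^2}{6}\approx 1.6449$:

    import numpy as np
    from scipy.integrate import quad

    f = lambda x: x/np.expm1(x) if x > 0 else 1.0  # the limit at 0 is 1
    val, _ = quad(f, 0, np.inf)
    print(val, np.pi**2/6)  # 1.6449..., 1.6449...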
Example of ideal generated by two elements
We have $3+(-1)\cdot 2=1 \in (3,2)$, hence the generated ideal contains $1$, hence it contains all of $\mathbb Z$.
Ricci flow reference with pictures
There are some really fine pictures of different aspects of singularities of Ricci flows in these lecture notes of Peter Topping. Good luck with your lecture!
What is the probability of at least one of the colours not appearing in the series of throws?
Let $X$ be the event that all three colours show up in the $3$ throws and $Y$ the event that at least one colour does not show up. In this way you can write $$P(Y) = 1 - P(X)$$ Since the three colours occupy $3$, $1$ and $4$ of the $8$ faces respectively, and the $3!$ accounts for the order in which they can appear, $$P(X) = \frac{3}{8}\cdot\frac{1}{8}\cdot\frac{4}{8}\cdot 3!$$ Hence: $$P(Y) = 1 - \frac{12\cdot 3!}{8^3}$$
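A brute-force check (a Python sketch; it assumes, as the fractions above suggest, an $8$-faced die whose faces carry three colours on $3$, $1$ and $4$ faces respectively):

    from itertools import product

    faces = 'AAA' + 'B' + 'CCCC'               # 3, 1 and 4 faces per colour (assumed)
    misses = sum(1 for throws in product(faces, repeat=3)
                 if len(set(throws)) < 3)      # at least one colour absent
    print(misses / 8**3, 1 - 12 * 6 / 8**3)    # both 0.859375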
Number of ways of choosing $x,y,z$ such that $z>\max(x,y)$
You can first choose the numbers and then label them. For example, choose three numbers, say $2,5,7$. You can set $z=7$ and then you have two choices: either $x=2$, $y=5$ or $x=5$, $y=2$. Another case is that you choose two numbers, say $4,6$: set $z=6$ and $x=y=4$. Therefore the answer is $2\cdot C(10,3)+C(10,2)$.
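A brute-force confirmation (a Python sketch, assuming $x,y,z$ are chosen from $\{1,\dots,10\}$ as the binomial coefficients suggest):

    from math import comb

    count = sum(1 for x in range(1, 11) for y in range(1, 11)
                  for z in range(1, 11) if z > max(x, y))
    print(count, 2*comb(10, 3) + comb(10, 2))  # 285 285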
How to represent sign function in range $[-1, 1]$
You can try $$ f(x)=\mathrm{sgn}\!\left(\mathrm{sgn}(x)+\frac12\right) $$
Finding a circle that touch two other circles and a line
Let's say $\epsilon$ is the line defined by $A$ and $B$, $K_1,K_2$ the centers of the two given circles (with radii $r_1,r_2$), and $K_3=(x_3,y_3)$ the center of the sought circle, with radius $r_3$. Then we have the following $3$ equations that will help us define $x_3,y_3,r_3$:

$1$. $\sqrt{(x_3-x_2)^2+(y_3-y_2)^2}=r_3+r_2$ or $\sqrt{(x_3-x_2)^2+(y_3-y_2)^2}=|r_3-r_2|$. That is, the distance of the centers $K_2,K_3$ equals the sum of the radii $r_3,r_2$ if the circles are tangent externally, or $|r_3-r_2|$ if they are tangent internally.

$2$. $\sqrt{(x_3-x_1)^2+(y_3-y_1)^2}=r_3+r_1$ or $\sqrt{(x_3-x_1)^2+(y_3-y_1)^2}=|r_3-r_1|$.

$3$. The distance of $K_3$ from $\epsilon$ equals $r_3$: if you write $\epsilon :Ax+By+C=0$ then $r_3=\frac{|Ax_3+By_3+C|}{\sqrt{A^2+B^2}}$.
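As an illustration, here is a numerical solution of such a system for one concrete configuration (a SymPy sketch; the circles $K_1=(0,0)$, $r_1=1$ and $K_2=(3,0)$, $r_2=1$, the line $y=-1$, external tangency, and a center above the line are all assumptions made for the example):

    import sympy as sp

    x3, y3, r3 = sp.symbols('x3 y3 r3')
    eqs = [sp.sqrt(x3**2 + y3**2) - (r3 + 1),        # externally tangent to K1
           sp.sqrt((x3 - 3)**2 + y3**2) - (r3 + 1),  # externally tangent to K2
           (y3 + 1) - r3]                            # distance to the line y = -1
    print(sp.nsolve(eqs, (x3, y3, r3), (1.5, -0.4, 0.5)))
    # x3 = 1.5, y3 = -0.4375, r3 = 0.5625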
The expected value of the minimum of two random variables conditional on the smaller random variable.
Partial answer: Suppose that $Y$ is an exponential distribution with rate $\mu$. Note that for $z>0$, we have \begin{align*} P(Z>z|Z=X)&=P(Z>z|X\leq Y)=\frac{P(Z>z,X\leq Y)}{P(X\leq Y)}\\&=\frac{P(z<X\leq Y)}{P(X\leq Y)}\\&=\frac{\int_z^\infty\int_x^\infty\lambda\mu e^{-\lambda x}e^{-\mu y}\,dy\,dx}{\int_0^\infty\int_x^\infty\lambda\mu e^{-\lambda x}e^{-\mu y}\,dy\,dx}\\&=e^{-(\lambda+\mu)z}=P(X>z,Y>z)\\&=P(Z>z). \end{align*} So $Z$ and $\{Z=X\}$ are independent, hence $$E[Z|Z=X]=E[Z]=\frac1{\lambda+\mu}.$$ For the same reason, $$E[Z|Z=Y]=E[Z]=\frac1{\lambda+\mu}.$$
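A quick Monte Carlo check of the independence claim (a NumPy sketch with, say, $\lambda=1$, $\mu=2$):

    import numpy as np

    rng = np.random.default_rng(0)
    lam, mu, N = 1.0, 2.0, 10**6
    X = rng.exponential(1/lam, N)
    Y = rng.exponential(1/mu, N)
    Z = np.minimum(X, Y)
    print(Z[X <= Y].mean(), 1/(lam + mu))  # both about 0.333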
Solve functional equation $f(x_1 x_2) = g_1(x_1) g_2(x_2)$
I assume $f, g_1, g_2$ take values in the positive reals. If so, consider instead the functional equation $$f(x_1 + x_2) = g_1(x_1) + g_2(x_2)$$ where $f, g_1, g_2$ are functions from the reals to the reals. I claim that studying this functional equation is equivalent to studying the previous one; given any solution $f, g_1, g_2$ to this functional equation, the triple $$e^{f(\log x)}, e^{g_1(\log x)}, e^{g_2(\log x)}$$ is a solution to the previous functional equation, and conversely. In the special case that $f = g_1 = g_2$, this functional equation is known as the Cauchy functional equation $$f(x + y) = f(x) + f(y).$$ The Cauchy functional equation is well-known to have "exotic" solutions besides the obvious solutions $f(x) = rx$; every solution comes from taking a Hamel basis of $\mathbb{R}$ as a vector space over $\mathbb{Q}$ (which exists conditional on the axiom of choice) and letting $f$ act arbitrarily on this basis. In general, note that the space of solutions is a real vector space, and moreover it contains the constant solutions $g_1 \equiv c$, $g_2 \equiv d$, $f \equiv c + d$. By replacing $(f, g_1, g_2)$ with $(f - f(0), g_1 - g_1(0), g_2 - g_2(0))$, we may assume WLOG that $f(0) = g_1(0) = g_2(0) = 0$. Then $$g_1(x) = f(x + 0) = f(x) = f(0 + x) = g_2(x)$$ so we reduce to the Cauchy functional equation.
How many ways can 10 identical buttons, 10 identical bows, and 10 identical beads be distributed to 4 different people?
Count how many ways there are to distribute $10$ identical buttons between the four people. Separately count how many ways there are to distribute $10$ identical bows between the four people. Separately count how many ways there are to distribute $10$ identical beads between the four people. Apply multiplication principle.
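If you want to check your final answer: each separate count is the stars-and-bars number $\binom{10+4-1}{4-1}=\binom{13}{3}=286$, so the multiplication principle gives $286^3$ (a Python sketch):

    from math import comb

    ways_per_item = comb(10 + 4 - 1, 4 - 1)   # stars and bars: 286
    print(ways_per_item ** 3)                 # 23393656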
"Averaging" transformation matrices?
Unfortunately, there's no really good way to average such transformations. In particular, in three dimensions there is no possible averaging operation $\mathcal A$ with all of the following natural and desirable properties: $\mathcal A$ is symmetric -- that is, $\mathcal A(M_1,M_2)=\mathcal A(M_2,M_1)$ for all $M_1$ and $M_2$. $\mathcal A$ is invariant under rotations of the coordinate system. Whenever the inputs to $\mathcal A$ are both rotation matrices (or both invertible, or both of determinant $1$), the output is also a rotation matrix (or invertible, or of determinant $1$). So at least one of these properties has to be given up. The only one that it really makes any sense to do without is (2), but even so the resulting outcome is going to be discontinuous and rather non-intuitive.
On a system of ODE's
Write the matrix $J$ in block form: with $$R=\begin{pmatrix}-2&1\\0&-2\end{pmatrix}$$ we have $$J=\operatorname{diag}((1),R)$$ Now since $R=-2I_2+N$ with $N=\begin{pmatrix}0&1\\0&0\end{pmatrix}$ nilpotent ($N^2=0$), we get $$\exp(xR)=e^{-2x}(I_2+xN)=\begin{pmatrix}e^{-2x}&xe^{-2x}\\0&e^{-2x}\end{pmatrix}$$ so $$\exp(xJ)=\operatorname{diag}((e^x),\exp(xR))$$ and finally the general solution of the system is $$Y(x)=P\exp(xJ)P^{-1}Y(0)$$
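A quick check of the block exponential (a SciPy sketch; the change-of-basis matrix $P$ depends on the original system and is not shown here):

    import numpy as np
    from scipy.linalg import expm

    x = 0.7
    J = np.diag([1.0, -2.0, -2.0])
    J[1, 2] = 1.0
    expected = np.diag([np.exp(x), np.exp(-2*x), np.exp(-2*x)])
    expected[1, 2] = x * np.exp(-2*x)
    assert np.allclose(expm(x * J), expected)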
Product of reflections is a rotation, by elementary vector methods
Write $c := \cos\theta$, $s := \sin\theta$, $\mathbf{w} := \mathbf{u}\times\mathbf{v} = s\mathbf{n}$, and $T := T_\mathbf{u}\left(T_\mathbf{v}\right)$, so that we have ... $$\begin{align} T(\mathbf{x}) &=\mathbf{x}-2(\mathbf{x}\cdot\mathbf{u})\mathbf{u}-2(\mathbf{x}\cdot\mathbf{v})\mathbf{v} + 4c(\mathbf{x}\cdot\mathbf{v})\mathbf{u}\\ R(\mathbf{x}) &= (2c^2-1)\mathbf{x} + 2 s^2 (\mathbf{x}\cdot\mathbf{n}) \mathbf{n} + 2 s c (\mathbf{x}\times\mathbf{n}) \\ &= (2c^2-1)\mathbf{x} + 2 (\mathbf{x}\cdot\mathbf{w})\mathbf{w} + 2 c (\mathbf{x}\times\mathbf{w}) \\ \end{align}$$ Decomposing $\mathbf{x}$ as $p\mathbf{u} + q\mathbf{v} + r \mathbf{w}$, we can get fairly directly ... $$\mathbf{x}\cdot\mathbf{u} = p + q c \qquad \mathbf{x}\cdot\mathbf{v}=pc+q \qquad \mathbf{x}\cdot\mathbf{w}=rs^2 \qquad (\star)$$ $$\mathbf{x}\times\mathbf{w} = \mathbf{x}\times \left(\mathbf{u}\times\mathbf{v}\right) = (\mathbf{x}\cdot\mathbf{v})\mathbf{u}-(\mathbf{x}\cdot\mathbf{u})\mathbf{v} \qquad (\star\star)$$ Then it's straightforward to show that the difference of the transformations vanishes: $$\begin{align} T(\mathbf{x}) - R(\mathbf{x}) &=\mathbf{x}-2(p+qc)\mathbf{u}-2(pc+q)\mathbf{v} + 4c(pc+q)\mathbf{u}\\ &-\left( (2c^2-1)\mathbf{x} + 2 r s^2 \mathbf{w} + 2 c \left( (pc+q)\mathbf{u} - (p+qc)\mathbf{v} \right) \right) \\[6pt] &= (2-2c^2)\;\mathbf{x} + 2 \left(-p-qc+2pc^2+2qc-pc^2-qc\right)\;\mathbf{u} \\ &+ 2\left(-pc-q+pc+qc^2\right)\mathbf{v} - 2 r s^2 \mathbf{w} \\[6pt] &= 2 s^2 \left( \mathbf{x} - p\mathbf{u} - q \mathbf{v} - r\mathbf{w} \right) \\[6pt] &= 0 \end{align}$$ and we conclude that the transformations are equivalent. $\square$ Edit. Without jumping immediately to the decomposition of $\mathbf{x}$, we can use the expansion in $(\star\star)$ to write $$\begin{align} \frac{T(\mathbf{x})-R(\mathbf{x})}{2s^2} \;\;&=\;\; \mathbf{x} \;-\; \left( \; \frac{\mathbf{x}.( \mathbf{u} - c \mathbf{v} )}{s^2}\;\mathbf{u} \;+\; \frac{\mathbf{x}.( \mathbf{v} - c \mathbf{u} )}{s^2}\;\mathbf{v} \;+\; \frac{\mathbf{x}.\mathbf{w}}{s^2}\;\mathbf{w} \;\right) \end{align}$$ If you can "see" that the coefficients of $\mathbf{u}$, $\mathbf{v}$, $\mathbf{w}$ are the components of $\mathbf{x}$ ---which would be clear for orthogonal $\mathbf{u}$ and $\mathbf{v}$, for which $c=0$ and $s=1$--- then you're done. If not, note that you can arrive at this insight by solving the dot-product equations $(\star)$ for $p$, $q$, $r$.
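A numerical spot check that the two maps agree (a NumPy sketch implementing the formulas for $T$ and $R$ exactly as written above):

    import numpy as np

    rng = np.random.default_rng(1)
    u = rng.normal(size=3); u /= np.linalg.norm(u)
    v = rng.normal(size=3); v /= np.linalg.norm(v)
    c, w = u @ v, np.cross(u, v)

    def T(x):  # reflect in the plane normal to v, then in the plane normal to u
        return x - 2*(x@u)*u - 2*(x@v)*v + 4*c*(x@v)*u

    def R(x):  # the rotation formula from the question, rewritten with w
        return (2*c**2 - 1)*x + 2*(x@w)*w + 2*c*np.cross(x, w)

    x = rng.normal(size=3)
    assert np.allclose(T(x), R(x))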
Is $(a,f(a))$ a point of inflection if $f'(a)=0$ and $f''(a)=0$?
Not necessarily. A point of inflection is a point where $f''$ changes sign. Simply knowing that $f''(a)=0$ does not imply a point of inflection, but it is necessary to have a point of inflection. As @mfl commented, $\displaystyle \frac{d^2}{dx^2}x^4=0$ at $x=0$, but $f(x)=x^4$ doesn't have a point of inflection at $x=0$. Whereas $\displaystyle \frac{d^2}{dx^2}x^5=0$ at $x=0$ and it does have a point of inflection there.
Geometry of General Linear Group
Viewing it as a norm on $n^2$-dimensional Euclidean space (rather than restricting to the general linear group), it is called the Frobenius norm (or inner product). It coincides (for all matrices) with the Schatten $2$-norm, which is the diagonal length of the image of the unit box under multiplication by the matrix. Compare that with the operator norm, which is the length of the longest side of the image of the unit box. See the excellent answer by Eric Naslund here.
Infinite series with cos in numerator
Hint. One may write, for $|q|<1$, $\alpha \in \mathbb{R}$, $$ \sum_{n=1}^{\infty} q^n \cos(n\alpha)=\Re \sum_{n=1}^{\infty} (qe^{i\alpha})^n =\Re\: \frac{qe^{i\alpha}}{1-qe^{i\alpha}} $$ where we have used the standard evaluation of a geometric series.
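Carrying the hint one step further, taking the real part explicitly gives the closed form $$\sum_{n=1}^{\infty} q^n \cos(n\alpha)=\frac{q\cos\alpha-q^2}{1-2q\cos\alpha+q^2}.$$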
How would I find the series generated by $e^x$ + $4x^2$? (exponential generating function)
Recall, an exponential generating function has the representation \begin{align*} \sum_{n=0}^\infty a_n\frac{x^n}{n!}\tag{1} \end{align*} The expression $4x^2$ can be written as $$8\frac{x^2}{2!}$$ to fit the representation in (1). The sequence of coefficients of the exponential generating function $e^x+4x^2$ is therefore \begin{align*} 1,1,\color{blue}{9},1,1,1,\ldots \end{align*}
How To Set The Lengths So It Is Equal To Expected Number
We have $a = b$ and $c = \frac{a}{\sqrt2}$, so the distance from $x$ to $y$ is $b + c = a + \frac{a}{\sqrt2} \approx 1.7071a$. Say you wanted to reduce the height from $x$ to $y$ from $10$ to $8$: $1.7071a = 10$ gives $a = \frac{10}{1.7071} \approx 5.8579 = b$, while $1.7071a = 8$ gives $a = \frac{8}{1.7071} \approx 4.6863 = b$.
If $\mathbb{R}=\bigcup_{n=1}^{\infty}E_n$, then the closure of some $E_n$ contains an interval
If $\mathbb{R} = \cup_{n=1}^\infty E_n$ where int$(\overline{E_n}) = \emptyset$ for all $n$, then $\mathbb{R}$ is a countable union of nowhere dense sets, contradicting the Baire Category Theorem.
How large should my family of subsets be before I am guaranteed an element meeting some requirements?
(Updated 1:30pm EST when I noticed the $|Y|=m$ requirement...) I am not sure I understand your bound, but here's how I would do it. (I'm not sure if this is equivalent or different.) Let $n= |X|$ and $A = \{a_1, \dots, a_k\}$. Also, we'll say $F$ is feasible if $F$ contains an $m$-subset that does not intersect $A$, i.e. $\exists Y \in F$ s.t. $Y \subset X, |Y| = m, Y \cap A = \emptyset$. Since any subset of size $< m$ cannot make $F$ feasible, the maximum infeasible $F$ will contain all such subsets. There are $\sum_{i = 0}^{m-1} {n \choose i}$ of them. (Note that many of these will not intersect $A$, but they do not make $F$ feasible because the requirement includes $|Y|=m$.) Further, the maximum infeasible $F$ contains all $m$-subsets which intersect $A$. Total number of $m$-subsets $= {n \choose m}$ Total number of $m$-subsets which do not intersect $A = {n - k \choose m}$ So, total number of $m$-subsets which intersect $A = {n \choose m} - {n - k \choose m}$ Therefore the maximum infeasible $F$ has $\sum_{i = 0}^{m-1} {n \choose i} + {n \choose m} - {n - k \choose m}$ subsets. If $|F|$ is bigger than this number, then $F$ is guaranteed feasible.
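For concrete parameters, the final count is easy to evaluate (a small Python sketch of the formula above):

    from math import comb

    def max_infeasible(n, m, k):
        # all subsets of size < m, plus all m-subsets meeting A (|A| = k)
        return sum(comb(n, i) for i in range(m)) + comb(n, m) - comb(n - k, m)

    print(max_infeasible(10, 3, 2))  # 120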
Linear Maps and Basis of Domain
It says that once you know how $T$ acts on a basis, you know how it acts on ALL vectors $v\in V$. To see this, suppose we have defined $T$ on $\left \{ v_{1}, v_{2},v_{3},\cdots, v_{n}\right \}$, a basis for $V$: $Tv_{1}=w_{1}, Tv_{2}=w_{2}, \cdots ,Tv_{n}=w_{n}$. We can express $v$ as a linear combination of the basis vectors by writing $v=a_{1}v_{1}+a_{2}v_{2}+a_{3}v_{3}+\cdots +a_{n}v_{n}$. Now apply $T$: $Tv=T(a_{1}v_{1}+a_{2}v_{2}+a_{3}v_{3}+\cdots +a_{n}v_{n})$. But $T$ is linear, so we get $Tv=a_{1}Tv_{1}+a_{2}Tv_{2}+a_{3}Tv_{3}+\cdots +a_{n}Tv_{n}$. So the effect of $T$ on our arbitrary $v$ only depended on how we defined $T$ on the basis $\left \{ v_{1}, v_{2},v_{3},\cdots, v_{n}\right \}$.
Problem about a simple division
$$\begin{align} \color{green}{1356}:339 &= \mathbf 4&\text{ remainder }&\color{red}0\\ {\color{red}0}\color{green}2:339 &= \mathbf 0&\text{ remainder }&{\color{red}2}\\ {\color{red}2}\color{green}8:339 &= \mathbf 0&\text{ remainder }&\mathbf {28}\\ \therefore \color{green}{135628}:339& = \mathbf{400}&\text{ remainder }&\mathbf{28}\end{align}$$
Simplification of the boolean expression
Is the given solution correct? No. In going from the second line to the third you substituted $\rm ABB'$ with $\rm A$. However $\rm ABB' = 0$ ($\rm BB'$ is a contradiction). The rest of your working is okay. So correct the error, try again, and you should have it.
Prove that $x_n=1+\frac{2}{4}+\frac{3}{16}+...+\frac{n}{4^{n-1}}$ converges
Use comparison: since $k<2^k$ for all $k\ge 0$, $$\sum_{k=0}^{\infty}{\frac{k}{4^{k-1}}}<\sum_{k=0}^{\infty}\frac{2^k}{4^{k-1}}=4\sum_{k=0}^{\infty}\bigg(\frac{2}{4}\bigg)^k=8.$$ The partial sums $x_n$ are increasing and bounded above, hence convergent.
Splitting costs of a costcenter
Ok so I finally figured it out, I think.

    +--------+--------+------------+
    | From   | Amount | To         |
    +--------+--------+------------+
    | Frank  | 10000  | (external) |
    | Cassie | 1000   | (external) |
    | Cassie | 5000   | Facilities |
    | Frank  | 500    | Culture    |
    | Naomi  | 5500   | Facilities |
    | Naomi  | 5500   | Culture    |
    +--------+--------+------------+

So if I did my math right, all I have to do is take Facility's expense (10000) and split it by the population that's paying it (Naomi and Cassie), then do the same for Culture's expense. After that I take the new additional costs of both and add that back into Naomi's payments, as she needs to cover both. I apologize for the convoluted problem and explanation. This has been swimming around in my head for a while and I have been trying many different solutions, so I'm sure I didn't come across very clearly, but just putting it up here has helped me think through the issue more thoroughly, so thanks for the assist in that regard.
Questions on Truth-Functional Logic
All three of them are true. For the first, for any formula $\phi$, we have that $$ \bot \to \phi $$ is a tautology. To see this, look at the truth table for $\to$. For the second, for any $\phi,\psi$, even if $\phi \to \psi$ does not hold, we always have $$ \phi \land \psi \to \psi $$ is a tautology. Finally, for any $\phi,\psi,\chi$, if $\phi\to\psi$ holds, then so does $$ \phi \land \chi \to \psi. $$
Geometric realization of the product of two simplicial sets
The following counterexample is apparently due to Dowker and is taken from the appendix of Hatcher's Algebraic Topology (p. 524-5). Let $X=\bigvee_{f\in S} I_f$ be a wedge sum of continuum many $1$-simplices $I_f$, indexed by the set $S$ of all functions $\mathbb{N}\to(0,1]$. Let $Y=\bigvee_{n\in\mathbb{N}} I_n$ be a wedge sum of countably infinitely many $1$-simplices $I_n$, indexed by the natural numbers. Given $f\in S$ and $n\in\mathbb{N}$, define $p_{fn}=(f(n),f(n))\in |I_f\times I_n|\subset|X\times Y|$, where we identify $|I_f\times I_n|$ with $[0,1]^2$, where $0$ corresponds to the wedge point in both coordinates. Let $P\subset|X\times Y|$ denote the set of these points $p_{fn}=(f(n),f(n))$ for all $f$ and $n$. Then $P$ is closed in $|X\times Y|$ (its intersection with any simplex consists of at most one point). However, $P$ is not closed as a subset of $|X|\times |Y|$ with the product topology. Indeed, letting $0$ denote the wedge point of both $|X|$ and $|Y|$, I claim that $(0,0)$ is in the closure of $P$ in $|X|\times |Y|$. To prove this, let $U$ be a neighborhood of $0$ in $|X|$ and $V$ be a neighborhood of $0$ in $|Y|$. For each $n\in\mathbb{N}$, there is some $t_n\in(0,1]$ such that $V\cap |I_n|$ contains $[0,t_n)$, identifying $|I_n|$ with $[0,1]$. We may assume the sequence $(t_n)$ converges to $0$ (if it doesn't, just make the numbers $t_n$ smaller). Now define $f:\mathbb{N}\to(0,1]$ by $f(n)=t_n/2$. Then $U\cap |I_f|$ contains $[0,s)$ for some $s>0$, identifying $|I_f|$ with $[0,1]$. Now choose $n$ sufficiently large so that $f(n)<s$. We then find that the point $p_{fn}=(f(n),f(n))\in |I_f\times I_n|$ is contained in $U\times V$. That is, $U\times V$ intersects $P$. Since $U$ and $V$ were arbitrary neighborhoods of $0$, this means $(0,0)$ is in the closure of $P$ with respect to the product topology.
Behavior of a function near a singular point
You may want to use Landau notation: $$f\in\mathcal O\left(\frac1{x^2}\right) \text{ as } x\to 0.$$
L'Hospital's rule's hypothesis that the right hand limit should exist
The derivatives may behave worse than the functions themselves. Consider $f(x)=x^2\sin\frac1x$ (for $x\ne 0$) and $g(x)=x$, so $f'(x)=2x\sin \frac1x-\cos \frac1x$ and $g'(x)=1$. Now we have $$\lim_{x\to0}\frac{f(x)}{g(x)}=\lim_{x\to 0}\frac{x^2\sin\frac1x}{x}=\lim_{x\to0}x\sin\frac1x=0$$ whereas $ \frac{f'(x)}{g'(x)}=2x\sin \frac 1x-\cos\frac1x$ diverges as $x\to0$.
Why will the result of two different vectors multiplied by the same matrix be the same? $||\mathbf H_{AB} f_{0.1}||^2 \approx ||\mathbf H_{AB} f_{0.2}||^2$
A random $5\times 6$ matrix almost surely has a one-dimensional kernel, i.e. the construction is equivalent to taking a random vector $\vec n_A$. Now you project two vectors onto $\vec n_A$ and get two parallel vectors $\vec f_{AB}=b\vec n_A$, $\vec f_{AC}=c\vec n_A$. Normalization gives $$ \vec f_{new}=\frac{\alpha b\vec n_A+(1-\alpha)c\vec n_A}{\|\alpha b\vec n_A+(1-\alpha)c\vec n_A\|}=\frac{d\vec n_A}{\|d\vec n_A\|}=\pm\frac{\vec n_A}{\|\vec n_A\|}. $$ It does not depend on $\alpha$. P.S. There is something wrong with your normalization formula: it does not make $\|f_{new}\|=1$.
Optimal strategy in 'asking for pie' game?
Ok, so what's the Nash equilibrium? Consider the $n=2$ player case. Let's look at a fixed size of the pie $p>0$ for now. A Nash equilibrium (NE) is a strategy profile $(a_1,a_2)$ such that $(a_1,a_2)$ are best responses to each other. A symmetric Nash equilibrium is $(a_1=p/2,a_2=p/2)$. "Proof": Could any of the two players do better by switching? Then we would not have a NE, otherwise we do. Suppose player 1 switched to $a_1>p/2$ ($a_2$ stays fixed). He would still get $p/2$ (given that half goes to player 2 who now has the smaller "demand" and is served first), so no improvement. Suppose player 1 switched to $a_1<p/2$, then he would get less than $p/2$, exactly what he asked for, but this is less than in the Nash equilibrium candidate (where he gets $p/2$). So $(a_1=p/2,a_2=p/2)$ is a Nash equilibrium, i.e., $a_1$ is the optimal action given $a_2$ and $a_2$ is the optimal action given $a_1$, i.e., both play best responses to each other. Is there also an asymmetric Nash equilibrium? No. Suppose without loss of generality $a_1>a_2$ would be a Nash equilibrium. If $a_2<p/2$, then player 2 could get more by playing $\hat{a}_2=(a_1+a_2)/2$, so there is a profitable deviation and it cannot be a NE. If $a_2>p/2$, then player 1 could get more by playing $\hat{a}_1=p/2$ (which guarantees $p/2$, which is more than $\max\{0,p-a_2\}$ [since $p-a_2<p/2$] that he would otherwise get), so a profitable deviation exists and it is no NE. With the same arguments we can show more generally that, for finitely many $n\ge 2$ players, the symmetric NE strategy has to be $a_i=p/n$ for $i=1,...,n$. (No asymmetric NE exists.) I leave the case of stochastic $p$ to someone else. Note: In case you don't know, a Nash equilibrium is vaguely what you call optimal strategies for both players when both players know what the other is playing (so that there is no surprise). In a Nash equilibrium, both players play the optimal strategy given the strategy of the other player.
The control of norm in quotient algebra
To start from a very simple case: If $B_1$ is a Hilbert space and $S$ is finite-dimensional such that we have $$ \|Tv\| \le c\|v\| + \|Sv\| \quad \forall v$$ then we can find a finite-dimensional $A$ satisfying $$\tag{1} \|Av\| \le \|Sv\|$$ and $$\tag{2} \|(T-A)v\| \le c \|v\|$$ for every $v \in B_1$. Proof: Write $B_1 = (\ker S)^\bot \oplus_2 \ker S$. We define $A$ separately on each summand: On $\ker S$, we can clearly choose $A = 0$. On its annihilator, we can work with an orthonormal basis: For each vector $e_n$, we set $$ Ae_n = \frac{\|Se_n\|}{c + \|Se_n\|} Te_n$$ (note that we know $Se_n \ne 0$) which immediately gives us $$ \|Ae_n\| \le \|Se_n\|$$ and $$ \|(T-A)e_n\| = \left\|\left(1 - \frac{\|Se_n\|}{c + \|Se_n\|} \right)Te_n\right\| \le c $$ Since (1) and (2) are thus satisfied both on all of $\ker S$ and whenever $v$ is member of the orthonormal basis for $(\ker S)^\bot$, i.e. $v = e_n$, they are satisfied for every $v \in B_1$.
a probability question related to computing the variance of a specific pattern
For $i=2,\dots,n$ let $Y_i = 1$ if $X_i>X_{i-1}$ and $Y_i = 0$ otherwise, so that $S = \sum_{i=2}^{n} Y_i$. Then $\text{Var}[S] = \sum_{i=2}^{n}\sum_{j=2}^{n} \text{Cov}[Y_i,Y_j]$. What is $\text{Cov}[Y_i,Y_j]$ when $i=j$? When $i$ and $j$ differ by $1$? When $i$ and $j$ differ by more than $1$? Try filling in the proof yourself!
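If you want to sanity-check the result you derive, a quick simulation helps (a NumPy sketch, assuming the $X_i$ are i.i.d. continuous, e.g. uniform on $[0,1]$):

    import numpy as np

    rng = np.random.default_rng(0)
    n, trials = 10, 10**5
    X = rng.random((trials, n))
    S = (X[:, 1:] > X[:, :-1]).sum(axis=1)  # number of ascents per row
    print(S.var())  # about 0.92 for n = 10; compare with your closed form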
Determining whether a function is Piecewise Polynomial
According to the definition that I am familiar with, a function on $\mathbb R$ is piecewise $P$ (for some property $P$, e.g. continuous, differentiable, constant, linear, polynomial, ...) if its domain can be divided into a union of non-degenerate intervals such that the function is $P$ over each interval. Fix $r = 2$, so that $g_r(x) = x(-1 + 2F(x))$. Suppose $X$ follows an exponential distribution with parameter $\lambda = 1$. Then $F(x) = 1 - e^{-x}$, and $g_2(x) = x - 2xe^{-x}$. The function $g_2$ has an infinite Taylor expansion about any $x$, so it cannot be polynomial in any neighbourhood. Of course, if the distribution of $X$ is such that $F(x)$ itself is piecewise polynomial, then so is $g_r$ for any $r$. However, I'm not sure if there are any $F$ and $r$ such that $F$ is not piecewise polynomial but $g_r$ is.
prove dot product in Rn equals lengths time cosine of angle
Use the cosine rule. Consider the triangle with vertices $0$, $x$ and $y$. It has side-lengths $a=\sqrt{\langle x,x\rangle}$, $b=\sqrt{\langle y,y\rangle}$ and $c=\sqrt{\langle x-y,x-y\rangle}$. Then $$\cos\theta=\frac{a^2+b^2-c^2}{2ab}.$$ Now put in the above expressions for $a$, $b$ and $c$.
Étale cohomology of projective space
You could count points and use the Lefschetz fixed point formula.
Is a loop actually a circuit?
I'd lean towards calling it a circuit. However, when it comes to these technical and isolated cases, it's often best to simply choose whichever definition makes your work easiest to read. The distinction between defining a loop as a circuit or not should be a very minor consideration in a larger body of work. The larger body of work is what's important. As long as we state our definition unambiguously and are consistent throughout, it's not a real problem. To illustrate, if you go through my papers and look at the different ways I've defined a Latin square, you'll see that the definitions are inconsistent. I just pick whichever definition makes the paper easiest to read. I could use up the reader's time in being consistent, but instead I just get to the real point of the paper. However, in some cases these slight distinctions can matter. For example, in Counting loopy graphs with given degrees by McKay and Greenhill, the distinction between loops contributing 1 or 2 to the degree of a vertex made a non-negligible difference in the case of asymptotic enumeration of graphs with a given degree sequence. The authors responded to this situation by analysing both cases.
How do I draw a diagram for a function space?
I am not sure it is possible to visualize function spaces in the way that you can visualize $\mathbb{R}^2$, but here is one attempt to see Arzela-Ascoli: Consider the usual $\epsilon-\delta$ picture of a limit (see this, for instance). Now, equicontinuity of a family $S = \{f_{\alpha}\}$ says that, for any such horizontal $\epsilon$-strip, there is a vertical $\delta$-strip such that, for any $x$ within that $\delta$-strip, the corresponding values $\{f_{\alpha}(x)\}$ are all in that horizontal $\epsilon$-strip. Suppose $f:[0,1] \to [0,1]$ is continuous. Visualize an open ball $B(f,r)$ around a function $f\in C[0,1]$ as a band of width $r/2$ on either side of the curve (i.e. it would be bounded by the curves $x \mapsto f(x) + r/2$ and $x \mapsto f(x) - r/2$) (see this, for instance). Now suppose $S := \{f_{\alpha} :[0,1]\to [0,1]\}$ is equicontinuous. You want to know why it is compact. So start with infinitely many bands $B(f_{\alpha}, \epsilon_{\alpha})$. Now fix $x \in [0,1]$ and look at all the bands "above" it. Since the bands above it do not go beyond the largest $Y$-value (i.e. $1$), there are only finitely many bands that are needed to cover $\{f_{\alpha}(x)\}$. Each of these finitely many bands contributes an $\epsilon_{\alpha}$, of which you can take the minimum - call that $\epsilon_x$. Now there are finitely many horizontal strips of radius $\epsilon_x$ that together cover all the $Y$-values $\{f_{\alpha}(x)\}$. Choose a vertical $\delta_x$-strip as in the equicontinuity picture above, and note that, for any $z \in (x-\delta_x,x+\delta_x)$, the corresponding $Y$-values $\{f_{\alpha}(z)\}$ are all in one of those finitely many horizontal $\epsilon_x$-strips. Now is where the compactness of the domain comes in: there are only finitely many such $\delta_x$ needed to cover all of $[0,1]$. Each $\delta_x$ needs only finitely many $B(f_{\alpha},\epsilon_{\alpha})$, which gives compactness. I have a pretty good picture in my mind right now, and I hope this explanation was able to translate that into words. Unfortunately, I am not sure how to represent this graphically. If someone can suggest a nice way to do this, then that would help greatly.
How do generators of a group work?
$(12)$ is a transposition. $(12)^{-1}=(12)$, that is, it's its own inverse. All that can be gotten by taking powers of $(12)$ is $(12)$ and $e$, the identity. (Note: In general, $\langle a\rangle =\{a^n:n\in\mathbb Z\}$). Thus $\langle (12)\rangle =\{(12),e\}$, a two element group. Since $\mid S_3\mid=6$, we get $[S_3:\langle (12)\rangle] =3$.
How would you integrate $\int_{0}^1 \frac{1-\cos x}{x^2}\,dx$
I don't think you can express this integral in terms of elementary functions. Otherwise you would have found a way to express the sin integral in terms of elementary functions. We have: $$ \int_{0}^1\frac{1-\cos x}{x^2} $$ We can integrate by parts. Let $u = 1-\cos x$ and $v'=\frac{1}{x^2}$ giving $u' = \sin x$ and $v = \frac{-1}{x}$. Thus we have: $$ \begin{align} \int_{0}^1\frac{1-\cos x}{x^2} &= \left .\frac{\cos x-1}{x}\ \right \rvert_{0}^1 + \int_{0}^1\frac{\sin x}{x} \\ \end{align} $$ This leaves us with finding the integral of $\sin x \over x$. This is a known non-elementary function called (creatively) the sine integral, $\text{Si}(x)$, and is defined as: $$ \text{Si}(x) = \int_0^x\frac{\sin t}{t}dt $$ Thus we have: $$ \begin{align} \int_{0}^1\frac{1-\cos x}{x^2} &= \left .\frac{\cos x-1}{x}\right \rvert_{0}^1 + \text{Si}(1) \\ &= \cos(1) - 1 + \text{Si}(1) \\ &\approx 0.486385 \end{align} $$ Note: the left term can be evaluated at $x=0$ by taking limits of the term to 0. I got the value of $\text{Si}(1)$ using the taylor expansion. Alternatively you could have started with the taylor expansion of the original equation but it would have inadvertently led you to the taylor expansion of $\text{Si}(1)$ in some form.
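The numerical value is easy to confirm (a SymPy sketch; Si is SymPy's built-in sine integral):

    import sympy as sp

    val = sp.cos(1) - 1 + sp.Si(1)
    print(sp.N(val))  # 0.486385...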
Is it against the definition of polygon for edges or vertices to overlap or being the same point or segment
Here is a picture from Wikipedia showing some polygons. Note that three of them self-intersect at a point (more than once for the regular star). Now, this diagram is not authoritative at all when it comes to naming polygons, but I would say most definitions would agree that they are all polygons.... it is entirely dependent on your definition though. It is assumed that if you construct a polygon as you do in your post that you allow for this to occur. It is decided more on a case-by-case basis... in some cases it is useful to break up an intersecting polygon into multiple non-intersecting polygons, and in some cases it is better to leave the polygon as one unit.
Open set and boundary points
Some hints:

1. It's easier to show that $A$ is the preimage of an open set in $\mathbb{R}$.

2. As $A$ is open, $a \in \partial A$ is not an element of $A$, thus $f(a) \leq 0$. By continuity, we know that for every $\epsilon > 0$ there is a $\delta > 0$ such that for every $x \in B(a, \delta)$ we have $f(x) < f(a) + \epsilon \leq \epsilon$. But we also know that there is some $x \in B(a,\delta) \cap A$, so we get $0 < f(x) < \epsilon$. By this we may obtain a sequence $(x_n) \subset A$ converging to $a$ such that $0 < f(x_n) < \frac1n$. By continuity again we see that $f(a) = f(\lim x_n) = \lim f(x_n) = 0$.

3. Try to find an example for which $a$ is in the boundary of the set $B := \left\{ x \in M \mid f(x) < 0 \right\}$, but not in $\partial A$. Then by similar arguments as for 1. and 2., $B$ is open, and $f(a) = 0$, but $a \notin A$.
A complex contour integral
For the people who'll find this in about ten years or so, the integral is solved by expanding the integrand of the hint using Euler's formula and trigonometry. Then the integral in question should equal the real part, and you've got your solution using the definition of $ \sinh{2} $.
Determining probability of rain based on total probability in given week
It is like finding the number of binary numbers with seven digits, four of them being ones. As you have guessed, it is a case where the convenient model is the Binomial distribution $B(n,p)=B(7,p)$, in which we are interested in the event where the random variable "number of rainy days $= X$" is equal to $4$. We know that, with this model: $$\tag{1}P(X=4)=\binom{7}{4}p^4(1-p)^3,$$ but with which $p$? Here is where we disagree: I don't have the same $p$ as you. For me, the value of $p$ is determined by the constraint: $$\text{Probability of at least a rainy day in the week} \ = 1-\underbrace{(1-p)^7}_{\text{not a single rainy day}}=2/7$$ $$\iff p=1-(5/7)^{1/7}=0.0469 \ \ \text{(not very simple!)}$$ It remains to plug this value of $p$ into (1) to obtain the result.
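Numerically (a quick Python sketch of the two formulas above):

    from math import comb

    p = 1 - (5/7)**(1/7)
    print(p)                               # 0.0469...
    print(comb(7, 4) * p**4 * (1 - p)**3)  # P(X = 4), about 0.000147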
General Solution by Method of Elimination
I don't see that you have a problem. Your first solution is: $$\begin{align} & y(t)=c_1e^t+c_2e^{-t}+c_3e^{-2t}-t-1 \\ & x(t)=c_1e^t-c_2e^{-t}-\frac45c_3e^{-2t}+2t-2\end{align}$$ And your second is: $$\begin{align} & x(t)=d_1e^t+d_2e^{-t}+d_3e^{-2t}+2t-2 \\ & y(t)=d_1e^t-d_2e^{-t}-\frac54d_3e^{-2t}-t-1\end{align}$$ And they are identical given $d_1=c_1$, $d_2=-c_2$, and $d_3=-\frac45c_3$.
A draft to see how difficult it is to calculate the asymptotics of $\int_1^x \frac{\sum_{n\leq t}G_n}{t^2}dt$, where $G_n$ are the Gregory coefficients
Since $$\sum_{n\geq 1}G_n z^n = \frac{z}{\log(1+z)}-1\tag{1}$$ we also have $$ \sum_{n\geq 1}z^{n-1}\sum_{m=1}^{n}G_m = \frac{1}{(1-z)\log(1+z)}-\frac{1}{z(1-z)}\tag{2} $$ and by multiplying both sides by $-\log(z)$ and performing $\int_{0}^{1}(\ldots)\,dz $ we get: $$\begin{eqnarray*} \sum_{n\geq 1}\frac{1}{n^2}\sum_{m=1}^{n}G_m &=& \int_{0}^{1}\frac{\log(z)}{1-z}\left(\frac{1}{z}-\frac{1}{\log(1+z)}\right)\,dz\\(\text{Schröder's})&=&\int_{0}^{+\infty}\frac{\zeta(2)-\text{Li}_2\left(-\frac{1}{1+x}\right)}{(x+2)\left(\pi^2+\log^2 x\right)}\,dx\\&\approx& 0.78027750661.\tag{3}\end{eqnarray*} $$ Many series involving Gregory coefficients can be directly computed from the generating function $(1)$. For instance: $$ \sum_{n\geq 1}\frac{G_n}{n} = \int_{0}^{1}\left(\frac{1}{\log(1+z)}-\frac{1}{z}\right)\,dz = -\gamma+\text{li}(2)\tag{4} $$ where $\gamma$ is the Euler-Mascheroni constant and $\text{li}(a)=\int_{0}^{a}\frac{dt}{\log t}$ is the logarithmic integral.
Tangent space definition
Suppose $U$ and $V$ are two neighborhoods of $p$, $f$ is defined on $U$ and $g$ is defined on $V$. How do you define $f+g$? (You do need at least an abelian group structure if you want to define derivations.) What you suggested in the comments is the logical solution: take $f+g$ defined on $U\cap V$. But then if I define $f_2$ as the restriction of $f$ on $U\cap V$ (which is a different function from $f$!), then $f_2+g$ is also $f+g$ on $U\cap V$. So $f_2+g=f+g$, and $f_2=f$. Ooops. The solution to that is to say that for our purposes, $f$ and $f_2$ should actually be considered the same function, because we are only interested in what happens near $p$. We don't care if $f$ is actually defined on a bigger neighborhood. But this is exactly the definition of germs: we identify functions that coincide on some smaller neighborhood. You see that this is necessary at the very least for the sum of functions to be well-defined.
Simplifying $ \frac{2x-x\lvert x-1\rvert+x\lvert x\rvert+5}{\lvert x\rvert+1}$
Hints: When dealing with something like $|r|$, you want to consider two cases: $r < 0$ and $r \geq 0.$ Here you have 3 conditions to worry about. Is $(x - 1) \geq 0$ or is $(x - 1) < 0$? Is $(x) \geq 0$ or is $(x) < 0$? Is $|x| + 1 = 0$? If possible, this would make the denominator $= 0,$ which is not allowed. However, this situation is impossible because $|x|$ can not equal $-1$. Try to "collapse" the cases that represent points 1 and 2 above into 3 distinct intervals. Then, for each interval, create a totally distinct function that applies only to that interval. I overlooked that the OP has shown work and that therefore, it is okay to complete the problem. The original function is $$ f(x) = \frac{2x-x\lvert x-1\rvert+x\lvert x\rvert+5}{\lvert x\rvert+1}$$ The three intervals will be Interval 1: $x < 0.~$ The specific function will be $f_1(x).$ $$ f_1(x) = \frac{2x ~- ~[(x)(1 - x)] ~+~ [x(-x)] ~+~ 5}{(-x) ~+ ~1}.$$ Interval 2: $0 \leq x < 1.~$ The specific function will be $f_2(x).$ $$ f_2(x) = \frac{2x ~- ~[(x)(1 - x)] ~+~ [x(x)] ~+~ 5}{(x) ~+ ~1}.$$ Interval 3: $1 \leq x.~$ The specific function will be $f_3(x).$ $$ f_3(x) = \frac{2x ~- ~[(x)(x - 1)] ~+~ [x(x)] ~+~ 5}{(x) ~+ ~1}.$$ At this point, none of $~f_1(x), f_2(x),~$ or $~f_3(x)$ are employing any absolute value signs. Now, you can consider whether there is any simplification that is common to all three functions $~f_1(x), f_2(x),~$ and $~f_3(x).$ From my perspective, other than multiplying the numerators out, I see no other possible simplification that can be applied to all three functions.
Linearizing a 2nd order ODE
The reference answer is wrong, your version is correct, $y+y^2$ linearizes to $Δy+2yΔy$, so that at $y=0$ only $Δy$ remains.
Sequence of continuous function that converges pointwise
Define the functions $f_n$ by $$\forall |x|\ge\frac1n,\ f_n(x) = 0$$ $$\forall |x| \le \frac1n,\ f_n(x) = (1 - nx)(1 + nx)$$ (the two definitions agree at $|x|=\frac1n$, where both vanish). Then each $f_n$ is continuous, but $f = \lim_{n\to\infty} f_n$ is not, as $f(x) = 0$ for all $x \ne 0$ while $f(0) = 1$.
Compute the expected value of the maximum of three exponentially distributed random variables.
For each $t>0$, $$ \left\{\max(Y_1+Y_2,Y_1+Y_3,Y_4)\leqslant t\right\} = \left\{Y_1+Y_2\leqslant t\right\}\cap\left\{Y_1+Y_3\leqslant t\right\}\cap \left\{Y_4\leqslant t\right\}, $$ and $Y_1+Y_2$ has density given by convolution \begin{align} f_{Y_1+Y_2}(t) &= f_{Y_1}\star f_{Y_2}(t)\\ &= \int_{\mathbb R}f_{Y_1}(\tau)f_{Y_2}(t-\tau)\ \mathsf d\tau\\ &= \int_0^t \lambda_1 e^{-\lambda_1\tau}\lambda_2 e^{-\lambda_2(t-\tau)}\ \mathsf d\tau\\ &= \begin{cases} \frac{\lambda _1 \lambda _2 }{\lambda _2-\lambda _1}\left(e^{-\lambda _1 t}-e^{-\lambda _2 t}\right),& \lambda_1\ne \lambda_2\\ \lambda_1(\lambda_1 t)e^{-\lambda_1 t},& \lambda_1=\lambda_2. \end{cases} \end{align} The same computation gives the density of $Y_1+Y_3$, with $\lambda_3$ in place of $\lambda_2$. Let $Z =\max(Y_1+Y_2,Y_1+Y_3,Y_4)$, then for each $t>0$, we have by independence \begin{align} \mathbb P(Z\leqslant t) &= \mathbb P\left(\left\{Y_1+Y_2\leqslant t\right\} \cap \left\{Y_1+Y_3\leqslant t\right\} \cap\left\{Y_4\leqslant t\right\} \right)\\ &= \mathbb P(Y_1+Y_2\leqslant t)\mathbb P(Y_1+Y_3\leqslant t)\mathbb P(Y_4\leqslant t)\\ &=\begin{cases} \int_0^t\frac{\lambda _1 \lambda _2 }{\lambda _2-\lambda _1}\left(e^{-\lambda _1 s}-e^{-\lambda _2 s}\right)\ \mathsf ds\cdot\int_0^t\frac{\lambda _1 \lambda _3 }{\lambda _3-\lambda _1}\left(e^{-\lambda _1 s}-e^{-\lambda _3 s}\right)\ \mathsf ds\cdot(1-e^{-\lambda_4 t}),& \lambda_1\ne\lambda_2,\ \lambda_1\ne \lambda_3\\ \int_0^t \lambda_1(\lambda_1 s)e^{-\lambda_1 s}\ \mathsf ds\cdot \int_0^t\frac{\lambda _1 \lambda _3 }{\lambda _3-\lambda _1}\left(e^{-\lambda _1 s}-e^{-\lambda _3 s}\right)\ \mathsf ds\cdot(1-e^{-\lambda_4 t}),& \lambda_1=\lambda_2,\ \lambda_1\ne \lambda_3\\ \int_0^t\frac{\lambda _1 \lambda _2 }{\lambda _2-\lambda _1}\left(e^{-\lambda _1 s}-e^{-\lambda _2 s}\right)\ \mathsf ds\cdot \int_0^t\lambda_1(\lambda_1 s)e^{-\lambda_1 s}\ \mathsf ds\cdot(1-e^{-\lambda_4 t}),& \lambda_1\ne\lambda_2,\ \lambda_1= \lambda_3\\ \left(\int_0^t\lambda_1(\lambda_1 s)e^{-\lambda_1 s}\ \mathsf ds\right)^2\cdot(1-e^{-\lambda_4 t}) ,& \lambda_1=\lambda_2,\ \lambda_1= \lambda_3\\ \end{cases}\\ &=\begin{cases} \frac{\left(\lambda _2\left(1-e^{-\lambda _1 t}\right)-\lambda _1\left(1-e^{-\lambda _2 t}\right)\right) \left(\lambda _3\left(1-e^{-\lambda _1 t}\right)-\lambda _1\left(1-e^{-\lambda _3 t}\right)\right) \left(1-e^{-\lambda _4 t}\right)}{\left(\lambda _2-\lambda _1\right) \left(\lambda _3-\lambda _1\right)},& \lambda_1\ne \lambda_2,\ \lambda_1\ne \lambda_3\\ \frac{\lambda _3\left(1-e^{-\lambda _1 t}\right)-\lambda _1\left(1-e^{-\lambda _3 t}\right)}{\lambda _3-\lambda _1}\left(1-e^{-\lambda _4 t}\right) \left(1-e^{-\lambda _1 t} \left(\lambda _1 t+1\right)\right),& \lambda_1=\lambda_2,\ \lambda_1\ne\lambda_3\\ \left(1-e^{-\lambda _4 t}\right) \left(1-e^{-\lambda _1 t} \left(\lambda _1 t+1\right)\right){}^2,& \lambda_1 = \lambda_2,\ \lambda_1=\lambda_3. \end{cases} \end{align} For any nonnegative random variable $X$, the mean of $X$ is given by the integral of its survivor function.
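In symbols: $$\mathbb E[Z]=\int_0^{\infty}\mathbb P(Z>t)\ \mathsf dt=\int_0^{\infty}\left(1-\mathbb P(Z\leqslant t)\right)\mathsf dt.$$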
It follows that \begin{align} \mathbb E[Z] &= \begin{cases} \frac{2 \left(\frac{2 \lambda _2 \lambda _3}{\lambda _4 \left(\lambda _1+\lambda _4\right) \left(2 \lambda _1+\lambda _4\right)}+\frac{1}{\lambda _2}-\frac{1}{\lambda _2+\lambda _3}+\frac{1}{\lambda _4}-\frac{1}{\lambda _2+\lambda _4}-\frac{1}{\lambda _3+\lambda _4}+\frac{1}{\lambda _2+\lambda _3+\lambda _4}+\frac{1}{\lambda _3}\right) \lambda _1^2+2 \left(\left(-\frac{1}{\lambda _4}+\frac{1}{\lambda _1+\lambda _4}+\frac{1}{\lambda _2+\lambda _4}-\frac{1}{\lambda _1+\lambda _2+\lambda _4}+\frac{1}{\lambda _1+\lambda _2}\right) \lambda _3-\frac{\lambda _3}{\lambda _2}+\lambda _2 \left(\frac{1}{\lambda _1+\lambda _3}-\frac{1}{\lambda _4}+\frac{1}{\lambda _1+\lambda _4}+\frac{1}{\lambda _3+\lambda _4}-\frac{1}{\lambda _1+\lambda _3+\lambda _4}-\frac{1}{\lambda _3}\right)\right) \lambda _1-2 \left(\lambda _2+\lambda _3\right)+\frac{3 \lambda _2 \lambda _3}{\lambda _1}}{2 \left(\lambda _1-\lambda _2\right) \left(\lambda _1-\lambda _3\right)},& \lambda_1\ne\lambda_2,\ \lambda_1\ne\lambda_3\\ \frac{\left(-\frac{8 \lambda _3}{\lambda _4 \left(\lambda _1+\lambda _4\right) \left(2 \lambda _1+\lambda _4\right)}-\frac{4}{\left(\lambda _1+\lambda _3\right){}^2}-\frac{4}{\left(\lambda _1+\lambda _4\right){}^2}+\frac{4}{\left(\lambda _1+\lambda _3+\lambda _4\right){}^2}\right) \lambda _1^2+4 \left(\lambda _3 \left(\frac{1}{\left(\lambda _1+\lambda _4\right){}^2}-\frac{1}{\left(2 \lambda _1+\lambda _4\right){}^2}\right)+\frac{1}{\lambda _3}-\frac{1}{\lambda _1+\lambda _3}+\frac{1}{\lambda _4}-\frac{1}{\lambda _1+\lambda _4}-\frac{1}{\lambda _3+\lambda _4}+\frac{1}{\lambda _1+\lambda _3+\lambda _4}\right) \lambda _1-\frac{9 \lambda _3}{\lambda _1}+8}{4 \left(\lambda _1-\lambda _3\right)} ,& \lambda_1=\lambda_2,\ \lambda_1\ne\lambda_3\\ \frac{2 \lambda _1^2}{\left(2 \lambda _1+\lambda _4\right){}^3}+2 \left(\frac{1}{\left(2 \lambda _1+\lambda _4\right){}^2}-\frac{1}{\left(\lambda _1+\lambda _4\right){}^2}\right) \lambda _1+\frac{11}{4 \lambda _1}+\frac{1}{\lambda _4}-\frac{2}{\lambda _1+\lambda _4}+\frac{1}{2 \lambda _1+\lambda _4},& \lambda_1=\lambda_2,\ \lambda_1=\lambda_3 \end{cases} \end{align} The most interesting case is where the $\lambda_i$ are distinct, in which the expectation of $Z$ is $$ \tiny\dfrac{2 \left(\dfrac{2 \lambda _2 \lambda _3}{\lambda _4 \left(\lambda _1+\lambda _4\right) \left(2 \lambda _1+\lambda _4\right)}+\dfrac{1}{\lambda _2}-\dfrac{1}{\lambda _2+\lambda _3}+\dfrac{1}{\lambda _4}-\dfrac{1}{\lambda _2+\lambda _4}-\dfrac{1}{\lambda _3+\lambda _4}+\dfrac{1}{\lambda _2+\lambda _3+\lambda _4}+\dfrac{1}{\lambda _3}\right) \lambda _1^2+2 \left(\left(-\dfrac{1}{\lambda _4}+\dfrac{1}{\lambda _1+\lambda _4}+\dfrac{1}{\lambda _2+\lambda _4}-\dfrac{1}{\lambda _1+\lambda _2+\lambda _4}+\dfrac{1}{\lambda _1+\lambda _2}\right) \lambda _3-\dfrac{\lambda _3}{\lambda _2}+\lambda _2 \left(\dfrac{1}{\lambda _1+\lambda _3}-\dfrac{1}{\lambda _4}+\dfrac{1}{\lambda _1+\lambda _4}+\dfrac{1}{\lambda _3+\lambda _4}-\dfrac{1}{\lambda _1+\lambda _3+\lambda _4}-\dfrac{1}{\lambda _3}\right)\right) \lambda _1-2 \left(\lambda _2+\lambda _3\right)+\dfrac{3 \lambda _2 \lambda _3}{\lambda _1}}{2 \left(\lambda _1-\lambda _2\right) \left(\lambda _1-\lambda _3\right)}. $$
Reference to the continued fractions of the form $\mathop{\text{K}}_{n=1}^{\infty}\frac{an+b}{cn+d}$
An easier-to-read form is obtained if one starts at $n=0$ instead of $n=1$: $${\raise{-1ex}\mathop{\huge\text{K}}_{n=0}^{\infty}}\frac{an+b}{cn+d}=\frac{a}{c}\frac{{_1F_1}'(\alpha;\beta;\gamma)}{_1F_1(\alpha;\beta;\gamma)}=\frac{a}{c}\frac{\alpha}{\beta}\frac{_1F_1(\alpha+1;\beta+1;\gamma)}{_1F_1(\alpha;\beta;\gamma)},\\\alpha:=\frac{b}{a},\quad\beta:=\frac{a}{c^2}+\frac{d}{c},\quad\gamma:=\frac{a}{c^2}.\quad\color{LightGray}{\left[\frac{a}{c}\frac{\alpha}{\beta}=\cfrac{b}{d+\cfrac{a}{c}}\right]}$$ It can be deduced from one of the recurrences for $_1F_1$, and there are other approaches (I did it myself some time ago, via ODEs for exponential generating functions for the convergents of the CF, which transform into Kummer's ODE for $_1F_1$ after a linear change of variable). In fact the closely related form $$\frac{_1F_1(a+1;b+1;z)}{_1F_1(a;b;z)}=\frac{b}{b-z}{\vphantom{1}\atop+}\frac{a+1}{b+1-z}{\vphantom{1}\atop+}\frac{a+2}{b+2-z}{\vphantom{1}\atop{+\ldots}}$$ does appear in the literature (say, in Jones & Thron, section $7.3.3$). It may well have been known since Kummer, and it was certainly known to Nørlund, who found a $_2F_1$ analogue (here is the idea, the "ODE for EGF" way).
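Here is a quick numerical sketch of the first identity using mpmath (the parameters $a,b,c,d$ below are arbitrary test values, and the truncation depth is a hypothetical choice). It evaluates a truncation of the continued fraction by backward recurrence and compares it with the $_1F_1$ ratio.

```python
from mpmath import mp, hyp1f1

mp.dps = 30
a, b, c, d = 1.0, 2.0, 1.0, 3.0   # arbitrary test parameters

# K_{n=0}^{N} (a n + b)/(c n + d), evaluated bottom-up from a truncated tail
tail = mp.mpf(0)
for n in range(500, -1, -1):
    tail = (a*n + b) / (c*n + d + tail)

alpha, beta, gamma = b/a, a/c**2 + d/c, a/c**2
rhs = (a/c) * (alpha/beta) * hyp1f1(alpha + 1, beta + 1, gamma) / hyp1f1(alpha, beta, gamma)
print(tail, rhs)   # both come out ~0.549624... with these parameters
```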
(Proof-check) Alternative formula for the total variation
Set $S = \sup_\mathcal{P} \sum_\mathcal{P} |f(t_i)-f(t_{i-1})|$ and let $\epsilon > 0$. By definition of the supremum you may find a partition $\mathcal{P}$ so that $$S - \sum_\mathcal{P} |f(t_i)-f(t_{i-1})| < \epsilon.$$ Now if you take any finer partition $\mathcal{P}' \supset \mathcal{P}$, the sum can only increase (by the triangle inequality), so $$S - \sum_{\mathcal{P}'} |f(t_i')-f(t_{i-1}')| < \epsilon \qquad (\star)$$ Now note that $I = \lim\limits_{\|\mathcal{P}\| \to 0} \sum |f(t_i)-f(t_{i-1})|$ exists and is unique (see https://math.stackexchange.com/a/2047959/72031), and by $(\star)$ we obtain $S < I + \epsilon$ for all $\epsilon >0$, and thus $S \leq I$, as wanted.
Proof of a corollary of the Banach Fixed Point Theorem
The proof appears to be correct, although you could simplify it: there's no need to write it as a proof by contradiction.
Solve a diophantine equation
Probably not of much use but some elementary observations about the numbers $m^4+m+1$ are as follows. There is precisely one solution modulo $3^n$ of the equation $$m^4+m+1\equiv 0\pmod {3^n}$$ The solution for $3^{n+1}$ can be obtained from the solution for $3^n$ as follows:- If $m^4+m+1\equiv 3^nd\pmod {3^{n+1}}$, where $d=1$ or $2$, then $$(m+3^nd)^4+(m+3^nd)+1\equiv 0\pmod {3^{n+1}}$$ The first few solutions are then:

$m=1:$ $m^4+m+1\equiv 3\pmod {3^2}$

$m=1+3=4:$ $m^4+m+1\equiv 2\times 3^2\pmod {3^3}$

$m=4+2\times 3^2=22:$ $m^4+m+1\equiv 3^3\pmod {3^4}$

$m=22+3^3=49:$ $m^4+m+1\equiv 2\times 3^4\pmod {3^5}$

$m=49+2\times3^4=211:$ $m^4+m+1\equiv 3^7\pmod {3^8}$

$m=211+3^7=2398:$ $m^4+m+1\equiv 2\times 3^8\pmod {3^9}$
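The lifting step is easy to automate; here is a short sketch (the function name is my own) that reproduces the table above.

```python
def lift_solutions(n_max):
    """Yield (n, m) with m**4 + m + 1 divisible by 3**n, lifting one 3-adic digit at a time."""
    m, mod = 1, 3   # m = 1 is the unique root mod 3
    for n in range(1, n_max + 1):
        yield n, m
        d = ((m**4 + m + 1) // mod) % 3   # residual digit: f(m) = 3**n * d (mod 3**(n+1))
        m, mod = m + mod * d, mod * 3     # the lifting rule from the answer

for n, m in lift_solutions(9):
    assert (m**4 + m + 1) % 3**n == 0
    print(n, m)   # 1, 4, 22, 49, 211, 211, 211, 2398, ...
```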
Smith normal form help
You are correct that the decomposition of $G$ is $\mathbb{Z}_2 \oplus \mathbb{Z}_4 \oplus \mathbb{Z}$. If the computer is telling you that the answer is $\mathbb{Z}_2 \oplus \mathbb{Z}_4 \oplus \mathbb{Z} \oplus \mathbb{Z}$, the most likely explanation is that the matrix for $G$ was somehow transposed at some point during your calculation. For example, if you originally entered the transpose of the original matrix for $G$, then you would get the transpose of the Smith normal form. Note that there isn't an established convention for which direction a matrix presentation for an abelian group should go. In some books (and in some computer programs), the rows correspond to generators and the columns correspond to relations, while in other books (and computer programs) this convention is reversed.
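To see the transposition effect concretely, here is a sketch using SymPy's `smith_normal_form` (the matrix is a made-up example, not your matrix, and I'm assuming the convention that columns index generators and rows index relations, and that your SymPy version accepts rectangular input).

```python
from sympy import Matrix
from sympy.matrices.normalforms import smith_normal_form

# Rows = relations, columns = generators: presents Z^3 / <2e1, 4e2> = Z_2 + Z_4 + Z.
M = Matrix([[2, 0, 0],
            [0, 4, 0]])

print(smith_normal_form(M))    # diag 2, 4 in a 2x3 matrix: 3 - 2 = 1 free Z summand
print(smith_normal_form(M.T))  # same invariant factors, but 2 generators: Z_2 + Z_4, no free Z
```

The invariant factors are unchanged by transposing; what changes is the number of generators minus the rank, which is exactly the extra $\mathbb{Z}$ summand you are seeing.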
If $p(x)\leq 1$ for $x\in [-1,1]$ find the greatest value of $\int_{-1}^{1}p(x)dx$
Since $f(0) = 1$, if $f$ does not exceed $1$ then $0$ must be a local maximum. Therefore $f'(0) = 0$ and $f''(0) \le 0$. This implies $b=0$ and $a \ge -1$. By your calculation, the integral is maximised with $a$ as negative as possible. Therefore $a=-1$.
In an additive category, why is finite products the same as finite coproducts?
Note that a product $A \times B$ is the same as a pull-back diagram $$\begin{array}{ccc} A\times B & \xrightarrow{\ q\ } & B\\ {\scriptstyle p}\downarrow & & \downarrow\\ A & \longrightarrow & 0 \end{array}$$ with maps $p:A \times B \to A$ and $q: A \times B \to B$. In particular there is a map $i: A \to A \times B$ such that $pi = 1_{A}$ and $qi = 0$ as well as a map $j: B \to A \times B$ such that $qj = 1_{B}$ and $pj = 0$. Now note that $p(ip + jq) = pip + 0 = p$ and $q(ip + jq) = q$ so that $ip + jq = 1_{A \times B}$. Let us check that $i: A \to A \times B$ and $j: B \to A \times B$ define a coproduct. Given $f: A \to D$ and $g: B \to D$ we get a map $d: A \times B \to D$ by setting $d = fp + gq$. Since $di = (fp + gq)i = fpi +gqi = f$ and $dj = g$ it remains to prove uniqueness of $d$. But this is clear as any other such map will satisfy $(d-d')1_{A \times B} = (d-d')(ip+jq) = 0$. Summing up, we have proved that the product of two objects is also a coproduct. Now if we want to say that finite products and coproducts exist and coincide, we need a zero object, since otherwise the empty product (the terminal object) and the empty coproduct (the initial object) would not coincide.
proof that a function is decreasing
First, thank you for the definition, David. Now, back to the point at hand. Try looking at factors you can multiply this expression with to simplify it, say $\sqrt{3+x^{1/3}}-2$, so that it is sufficient to show that $$\frac{(\sqrt{3+x^{1/3}}+2)(\sqrt{3+x^{1/3}}-2)}{x-1}<\frac{(\sqrt{3+y^{1/3}}+2)(\sqrt{3+y^{1/3}}-2)}{y-1} \quad\forall\; x>y.$$ Note that each numerator collapses, since $(\sqrt{u}+2)(\sqrt{u}-2)=u-4$: the left side becomes $\frac{x^{1/3}-1}{x-1}$ and the right side $\frac{y^{1/3}-1}{y-1}$. Maybe you can simplify this further?
If $f$ is an entire function with $|\,f(z)|\le|\operatorname{Re}(z)|$, then $\,f\equiv 0$.
The hypothesis $|f(z)| \le |\mathrm{Re}(z)|$ implies that $f(iy) = 0$ for all $y$. This gives you $f(0) = 0$ and $$f'(0) = \lim_{y \to 0} \frac{f(iy) - f(0)}{iy} = 0.$$
Inequality for convex function say $f$ with $L$-Lipschitz continuous gradient: $( x - y)^T \left( \alpha \nabla f(x) - \beta \nabla f(y)\right)$?
I am not sure whether this is the tightest bound which you can achieve, but here is my attempt for part 1. Let's take the case $\alpha,\beta >0$ and $\alpha \geq \beta$: \begin{align} ( x - y)^T \left( { \alpha} \nabla f(x) - {\beta} \nabla f(y)\right) &= \frac{(\alpha + \beta)}{2}(x - y)^T(\nabla f(x) - \nabla f(y)) \ + \frac{(\alpha - \beta)}{2}(x-y)^T(\nabla f(x) + \nabla f(y)) \\ &\leq \frac{(\alpha+\beta)}{2}L\|x-y\|^2 + \frac{(\alpha - \beta)}{2}(x-y)^T(\nabla f(x) + \nabla f(y)) \end{align} The second term can grow unbounded in general unless $f$ itself is Lipschitz continuous. Assuming $f$ is $G$-Lipschitz continuous, we have $$ \|\nabla f(x)\| \leq G. $$ Then by the Cauchy–Schwarz inequality: \begin{align} ( x - y)^T \left( { \alpha} \nabla f(x) - {\beta} \nabla f(y)\right) &\leq \frac{(\alpha+\beta)}{2}L\|x-y\|^2 + \frac{(\alpha - \beta)}{2}\cdot 2G\|x-y\| \end{align}
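A small numerical sketch of the final bound for a quadratic $f(x)=\frac12 x^TQx$ restricted to a ball (my own test setup: on $\|x\|\le R$ the gradient bound $G = LR$ holds, since $\|\nabla f(x)\|=\|Qx\|\le L\|x\|$).

```python
import numpy as np

rng = np.random.default_rng(1)
n, R = 4, 5.0
B = rng.normal(size=(n, n))
Q = B.T @ B                          # convex f(x) = 0.5 x^T Q x, gradient Qx
L = np.linalg.eigvalsh(Q).max()      # Lipschitz constant of the gradient
G = L * R                            # gradient bound on the ball ||x|| <= R
alpha, beta = 2.0, 0.5               # alpha >= beta > 0, as in the derivation

def rand_ball():
    x = rng.normal(size=n)
    return x / np.linalg.norm(x) * R * rng.random()

for _ in range(100_000):
    x, y = rand_ball(), rand_ball()
    lhs = (x - y) @ (alpha * Q @ x - beta * Q @ y)
    rhs = (alpha + beta) / 2 * L * np.linalg.norm(x - y)**2 \
        + (alpha - beta) * G * np.linalg.norm(x - y)
    assert lhs <= rhs + 1e-9
print("bound held on all samples")
```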
Asymptotic correlation between sample mean and sample median
Obtain this paper, written by T.S. Ferguson, a professor at UCLA (his page is here). It derives the joint asymptotic distribution for the sample mean and sample median. To be specific, let $\hat X_n$ be the sample mean and $\mu$ the population mean, $Y_n$ be the sample median and $\mathbb v$ the population median. Let $f(\cdot)$ be the probability density of the random variables involved ($X$). Let $\sigma^2$ be the variance. Then Ferguson proves that $$\sqrt n\Big [\left (\begin{matrix} \hat X_n \\ Y_n \end{matrix}\right) - \left (\begin{matrix} \mu \\ \mathbb v \end{matrix}\right)\Big ] \rightarrow_{\mathbf L}\; N\Big [\left (\begin{matrix} 0 \\ 0 \end{matrix}\right) , \Sigma \Big]$$ $$ \Sigma = \left (\begin{matrix} \sigma^2 & E\left(|X-\mathbb v|\right)\left[2f(\mathbb v)\right]^{-1} \\ E\left(|X-\mathbb v|\right)\left[2f(\mathbb v)\right]^{-1} & \left[2f(\mathbb v)\right]^{-2} \end{matrix}\right)$$ Then the asymptotic correlation of this centered and normalized quantity is (abusing notation as usual) $$\rho_{A}(\hat X_n,\, Y_n) = \frac {\text {Cov} (\hat X_n,\, Y_n)}{\sqrt {\text{Var}(\hat X_n)\text{Var}(Y_n)}} = \frac {E\left(|X-\mathbb v|\right)\left[2f(\mathbb v)\right]^{-1}}{\sigma\left[2f(\mathbb v)\right]^{-1}} = \frac {E\left(|X-\mathbb v|\right)}{\sigma}$$ In your case, $\sigma = 1$ so we end up with $$\rho_{A}(\hat X_n,\, Y_n) = E\left(|X-\mathbb v|\right)$$ In your case, the population follows the normal with unitary variance, so the random variable $Z= X-\mathbb v$ is $N(0,1)$. Then its absolute value follows the (standard) half normal distribution, whose expected value is $$ E(|Z|) =\sigma\sqrt {\frac{2}{\pi}} = \sqrt {\frac{2}{\pi}}$$ since here $\sigma =1$. So $$\rho_{A}(\hat X_n,\, Y_n) = \sqrt {\frac{2}{\pi}}$$ Added note: It can be seen that the result does not depend on $\sigma =1$, since $\sigma$ cancels out from numerator and denominator.
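A quick simulation sketch confirming the $\sqrt{2/\pi}\approx 0.798$ figure for standard normal samples (the sample size and replication count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1000, 20000
X = rng.standard_normal((reps, n))
means, medians = X.mean(axis=1), np.median(X, axis=1)

print(np.corrcoef(means, medians)[0, 1])   # empirical correlation, ~0.798
print(np.sqrt(2 / np.pi))                  # asymptotic value sqrt(2/pi)
```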
Why does $f(a) = 0$ when proving the mean value theorem?
The variable in the function $g(x)$ is $x$, while the function defined by $h(x)=f(a)$ is constant in $x$; thus $h'(x)=0$.
Integer Sides of a right angle
Since $a=2009^{12}$ is odd and any right triangle with legs $a,b$ and hypotenuse $c$ satisfies $$a^2=c^2-b^2=(c-b)(c+b),$$ we need to count all different factorizations of $a^2 = 7^{48}41^{12\cdot 2}$ as $(c-b)(c+b)$ with $c-b<c+b$. Every such factorization works: both factors are automatically odd, so $b$ and $c$ come out integral. Hint for your question: have a look at this sequence. So the number is $$ \frac{(2\cdot 24+1)(2\cdot 12+1)-1}{2}. $$
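For completeness, the count can be checked mechanically with SymPy's `divisor_count`:

```python
from sympy import divisor_count

a = 2009**12                            # = 7**24 * 41**12
print((divisor_count(a**2) - 1) // 2)   # (49*25 - 1)/2 = 612
```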
Sampling theorem condition
Let $x(t) = \sin(2 \pi f_0 t )$; then $X(j\omega) = \frac{\pi}{j}[\delta(\omega - 2 \pi f_0) - \delta(\omega + 2 \pi f_0)]$. So to satisfy the sampling-theorem condition we should have $$\omega_s \gt 2\times2\pi f_0,$$ where $\omega_s = \frac{2\pi}{T}$ is the sampling frequency. Note that it should be $\gt$ instead of $\ge$. See here for the reason.
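To see why the inequality must be strict, sample $\sin(2\pi f_0 t)$ at exactly $\omega_s = 2\cdot 2\pi f_0$ (a small sketch; $f_0=5$ Hz is an arbitrary choice): every sample lands on a zero crossing, so the sinusoid is unrecoverable.

```python
import numpy as np

f0 = 5.0        # hypothetical signal frequency (Hz)
fs = 2 * f0     # sampling at exactly twice the signal frequency
n = np.arange(20)
samples = np.sin(2 * np.pi * f0 * n / fs)   # sin(pi * n) = 0 for every integer n
print(np.allclose(samples, 0))              # True: all samples are zero
```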
Leibniz rule X Direct integral computation
The problem lies in the fact that the function $g(x) = (t - x)^{a - 1}$ is not defined at $x = t$. You are treating it as if $g(t) = (t - t)^{a - 1} = 0$, but this does not actually make sense; the graph of $g$ has a vertical asymptote as $x$ approaches $t$ from the left. If you treat your problem carefully by instead writing $$\int_0^{t - \delta} (t - x)^{a - 1} \, dx = \frac{t^a - \delta^a}{a} \to \frac{t^a}{a} \quad\text{as } \delta \to 0^+,$$ you can get the proper calculation.
Proof that Axiom of Dependent Choices is equivalent to a statement about posets
The thing to remember about these kinds of proofs is that the failure of the choice principle usually means that there is some object satisfying the hypothesis, but not the conclusion. From that object we should find a counterexample to the equivalence statement (in this case about well-founded posets). The failure of the version of $\sf DC$ given here means that there is some non-empty set $A$ and a relation $R$ whose domain is $A$, but there is no function from $\Bbb N$ to $A$ which gives us an $R$-sequence. Note that by [a choice-free] induction we can still show that there are arbitrarily long finite $R$-sequences. If $a_0,\ldots,a_n$ is an $R$-sequence (namely $a_i\mathrel{R}a_{i+1}$), then by the fact that $a_n$ is in the domain of $R$, there is some $b\in A$ such that $a_n\mathrel{R}b$. Pick such $b$ to be $a_{n+1}$; this shows that if there are sequences of length $n$, there are sequences of length $n+1$. Of course, moving from the above induction to a full-blown $\Bbb N$-long sequence is exactly what $\sf DC$ guarantees. And we assumed that $R$ was a counterexample to that! So there is no such sequence. Therefore, look at the collection $T$ of finite $R$-sequences, ordered by reverse end-extension (namely, $\vec a\leq_T\vec b$ if $\vec b$ is an initial segment of $\vec a$), and show that $T$ is indeed a counterexample to the equivalence about posets.
multiple approaches/ways to prove that $1000^N - 1$ cannot be a divisor of $1978^N - 1$
Hint $\ $ Examining their factorizations for small $\rm\,N\,$ shows that the power of $3$ dividing the former exceeds that of the latter (by $2),$ so the former cannot divide the latter. It suffices to prove by induction that this pattern persists (which requires only simple number theory).
Markov inequality with coin flips
I think I understand now. Markov's inequality gives $$P(X \geq 134) \leq \frac{N\cdot P(\text{heads})}{134},$$ and setting the bound equal to $0.8$: $$0.8 = \frac{N\cdot 0.5}{134}$$ $$N = \frac{0.8\cdot 134}{0.5} = 214.4$$
How can we factor out the maximum value of f'(x) in an integral with an absolute value?
Use the mean value theorem. Split the integral into $\int_{x_{k-1}}^{x^*_k} + \int_{x^*_k}^{x_k}$, apply the theorem on each piece, and then use the fact that $x_k^*$ lies between $x_{k-1}$ and $x_k$, so $(x_k-x^*_k)^2 \le (x_k-x_{k-1})^2$ and similarly for $(x_{k-1}-x^*_k)^2$.
Expectation of sum of RVs: what if there's conditional RV inside the sum?
The second one is correct, and you have justified it using the law of total expectation. The first one can't be correct: note that $E[Z_2|Z_1]$ is a function of $Z_1$, and also it is possible that $Z_1$ never takes the value $E[Z_1]$.
Ordinal addition is associative - help with proof
This should be tagged elementary set-theory. First show that your definition of ordinal sum is equivalent to the following: $\alpha + \beta$ is the lexicographic order on $\{0\} \times \alpha \cup \{1\} \times \beta$ (a copy of $\alpha$ followed by a copy of $\beta$). Now associativity is obvious.
Congruence module algebra
Use the fact that if $$x\equiv 0,1,2,3,4\pmod 5,$$ then $$x^2\equiv 0,1,4\pmod 5.$$
A question about Green function in Laplace equations
Here's an outline. First of all note that, by the maximum principle, there is a constant $c=c_n$ (the normalization constant in the definition of the fundamental solution for the Laplacian) such that $|G(x,y)|\leq c|x-y|^{2-n}$. Therefore, by a variant of the dominated convergence theorem, it's enough to show that $$ \int_\Omega \frac{|f(y)|}{|x-y|^{n-2}} dy \to \int_\Omega \frac{|f(y)|}{|z-y|^{n-2}}dy, \qquad \text{ when } x\to z\in \partial \Omega. $$ This is easy to see: clearly the integrands converge pointwise a.e. in $\Omega$, and moreover the functions $f_x(y)=|x-y|^{2-n}$ are uniformly bounded (in $x$) in $L^p(\Omega)$ for $p<n/(n-2)$, and thus they're weakly convergent as $x\to z$ by uniqueness of the pointwise and weak limits. But since $f\in L^\infty(\Omega) \subset L^{p'}(\Omega)$, we conclude the desired limit.
A sequence converging to zero but having a reciprocal that does not go to $\infty$
Let $a_n=\dfrac{(-1)^n}n$ for $n\in\Bbb Z^+$; then $\lim\limits_{n\to\infty}a_n=0$, but $\lim\limits_{n\to\infty}\dfrac1{a_n}$ does not exist. For that matter, just let $a_n=0$ for all $n$: then $\dfrac1{a_n}$ isn’t even defined for any $n$. It is true, however, that if $\langle a_n:n\in\Bbb Z^+\rangle$ is a sequence of non-zero real numbers that converges to $0$, then $$\lim_{n\to\infty}\left|\frac1{a_n}\right|=\infty\;.$$
variance of quadratic forms when the random variables are not normally distributed
Writing $X^T A X = \sum_{i,j} A_{ij} X_i X_j$, we have $$ \text{Var}(X^T A X) = \sum_{i,j,k,l} A_{ij} A_{kl} \text{Cov}(X_i X_j, X_k X_l)$$ The covariance is $0$ if $\{i,j\}$ and $\{k,l\}$ are disjoint, but all the other terms are potentially nonzero. You'll have to consider various cases. Without knowing anything about the distribution of the $X_i$, there's not much more that can be said.
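Here is a sketch of the case bookkeeping for iid coordinates (my own example with Exponential(1) entries, where the raw moments are $E[X^k]=k!$): the quadruple sum over $\mathrm{Cov}(X_iX_j,X_kX_l)$ is computed from the moments and compared against simulation.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.normal(size=(n, n))

# iid Exponential(1) coordinates: raw moments E[X^k] = k!
mu = {1: 1.0, 2: 2.0, 3: 6.0, 4: 24.0}

def e_prod(idx):
    # E[prod of X_i over idx] for iid coordinates: group by index multiplicity
    out = 1.0
    for _, grp in itertools.groupby(sorted(idx)):
        out *= mu[len(list(grp))]
    return out

var = 0.0
for i, j, k, l in itertools.product(range(n), repeat=4):
    cov = e_prod((i, j, k, l)) - e_prod((i, j)) * e_prod((k, l))
    var += A[i, j] * A[k, l] * cov   # the quadruple sum from the answer

X = rng.exponential(size=(10**6, n))
q = np.einsum('ni,ij,nj->n', X, A, X)   # x^T A x per sample
print(var, q.var())                      # exact sum vs Monte Carlo estimate
```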
Limit of ratio of consecutive terms of a recurrence relation.
This is not true as stated. For example, consider $$ p_1=-1,\ p_2=0,\ p_3=1,\ k=3, $$ so $\lambda=\pm1$, and a solution $$ a_n= \begin{cases} 2 & n\text{ even}\\ 1 & n\text{ odd}. \end{cases} $$ Then $$\lim_{n\to\infty}\frac{a_{n+1}}{a_n}\tag{1}$$ does not exist, so cannot be equal to either $1$ or $-1$. Addendum: If the limit (1) exists, then it is one of the $\lambda_r$s. This is because the general solution for $a_n$ is a sum of terms $$ q_\lambda(n)\lambda^n, $$ where $\lambda$ is a root of the characteristic (or auxiliary) polynomial $\sum_{r=0}^{k-1}p_{r+1}X^r$ and $q_\lambda$ is a polynomial (of degree one less than the multiplicity of the root $\lambda$; in particular, a constant if $\lambda$ is a simple root). So dividing and taking the limit: $$ \frac{a_{n+1}}{a_n}=\frac{\sum_\lambda q_\lambda(n+1)\lambda^{n+1}}{\sum_\lambda q_\lambda(n)\lambda^n}\to\lambda_r, $$ where $r$ is chosen such that all $\lambda_j$ with $\lvert\lambda_j\rvert>\lvert\lambda_r\rvert$ have $q_{\lambda_j}\equiv 0$, and all $\lambda_j$ with $\lvert\lambda_j\rvert=\lvert\lambda_r\rvert$ have $\deg q_{\lambda_j}<\deg q_{\lambda_r}$ (the degree of the zero polynomial being $-1$).
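A short numerical illustration (the recurrences and initial values are chosen just for this example): a Fibonacci-style recurrence has a convergent ratio, while the $2,1,2,1,\dots$ solution of $a_{n+2}=a_n$ oscillates.

```python
# Fibonacci: a_{n+2} = a_{n+1} + a_n; ratio converges to the dominant root (1+sqrt(5))/2.
a, b = 1, 1
for _ in range(30):
    a, b = b, a + b
print(b / a)   # ~1.618...

# Counterexample from the answer: a_{n+2} = a_n with a_0, a_1 = 2, 1.
seq = [2, 1] * 10
print([seq[i + 1] / seq[i] for i in range(6)])   # oscillates 0.5, 2.0, 0.5, ...
```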
Prove (n choose k) ((n − k) choose (m − k)) = (n choose m) (m choose k)
Hint. For the LHS, first choose the elements of $A$; then choose more elements for $B$ to make up the required number. For the RHS, first choose the elements of $B$, then from these elements, choose the elements of $A$. Good luck!
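If you want the algebraic check alongside the counting argument, the factorial computation is one line:

$$\binom{n}{k}\binom{n-k}{m-k}=\frac{n!}{k!\,(n-k)!}\cdot\frac{(n-k)!}{(m-k)!\,(n-m)!}=\frac{n!}{m!\,(n-m)!}\cdot\frac{m!}{k!\,(m-k)!}=\binom{n}{m}\binom{m}{k}.$$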
Proof marginal distribution of multivariate normal with mgf
Yes, your method is correct. The m.g.f. for a vector random variable $X$, if it is finite in a neighborhood of the origin, has packed in it the m.g.f. of any linear function of $X$. In your case, the $i$-th coordinate $X_i$ is a linear function of $X$. Suppose $X$ has m.g.f. $\psi(t) = E\exp(\langle t, X\rangle)$ and that $Y=AX$ for some matrix $A$. Then $\phi(u) = E\exp(\langle u, Y\rangle) = E\exp( \langle A' u, X\rangle) = \psi(A' u)$.