Navier-Stokes Formulation
The most generic form of the Navier-Stokes equation is $$ \frac{\partial (\rho \mathbf{u})}{\partial t} + \nabla \cdot(\rho \mathbf{u} \otimes \mathbf{u}) = \nabla p - \mathbf{f} + \nabla \cdot \mathbf{S}, \tag{1} $$ in which $\mathbf{S}$ is the shear stress tensor ($\mathbf{S}=\mu\nabla \mathbf{u}$ in your case). The continuity equation is $$ \frac{\partial \rho}{\partial t}+\nabla \cdot (\rho \mathbf{u})=0. \tag{2} $$ Using the product rule for derivatives in equation $(1)$ leads to $$ \rho \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\frac{\partial \rho}{\partial t} + \rho \mathbf{u} \cdot \nabla \mathbf{u} + \mathbf{u} \nabla \cdot(\rho \mathbf{u}) = \nabla p - \mathbf{f} + \nabla \cdot \mathbf{S}, \tag{3} $$ and using the continuity equation we see that the second and fourth terms on the LHS cancel each other, leading to $$ \rho\left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right) = \nabla p - \mathbf{f} + \nabla \cdot \mathbf{S}. \tag{4} $$ Since it's usually assumed that the continuity equation holds, equation $(4)$ is completely equivalent to equation $(1)$. In practice, however, there are some "differences": Equation $(1)$ is called the conservative form of the Navier-Stokes equation, while equation $(4)$ is called the non-conservative form. These names are a bit misleading: both equations express conservation of momentum. However, when solving the governing equations of fluid dynamics numerically, it's sometimes more useful to use equation $(1)$. This is basically due to the fact that across a shock wave the velocity $\mathbf{u}$ is discontinuous (and, therefore, equation $(4)$ involves the gradient of a discontinuous function), while $\rho \mathbf{u} \otimes \mathbf{u}$ is continuous even across the shock. Note that equation $(1)$ is called the conservative form because it has derivatives of a conserved quantity (the momentum, i.e., $\rho \mathbf{u}$), while equation $(4)$ has derivatives of a non-conserved quantity (the velocity). Equation $(4)$ explicitly shows the transport of momentum in the term $\mathbf{u} \cdot \nabla \mathbf{u}$. Notice that the conservation equation of a property $\phi$ (which can be enthalpy, vorticity, chemical species, etc.) will have a term $\mathbf{u} \cdot \nabla \phi$. Therefore, equation $(4)$ "looks like" every other conservation equation. The material derivative of a property $\phi$ is usually defined as $$ \frac{D \phi}{Dt} = \frac{\partial\phi}{\partial t} + \mathbf{u} \cdot \nabla\phi, $$ which can be interpreted as the rate of change in time of the property $\phi$ in a fluid particle as that particle is transported by the flow. Then the conservation equation of any property $\phi$ can be written generically as $$ \frac{D \phi}{Dt} = \text{source terms}, $$ in which the source terms for the case $\phi=\mathbf{u}$ (i.e., the Navier-Stokes equation) are $(\nabla p - \mathbf{f} + \nabla \cdot \mathbf{S})/\rho$. Summarizing: both equations are the same. When you need to solve them numerically, equation $(1)$ can be more suitable. If you want to interpret the physical meaning of the terms of the Navier-Stokes equation, equation $(4)$ is more suitable.
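As a sanity check of the algebra, here is a minimal 1-D symbolic sketch (my own, using SymPy) showing that the conservative and non-conservative forms differ by exactly $\mathbf{u}$ times the continuity equation, so they agree whenever continuity holds:

```python
import sympy as sp

t, x = sp.symbols('t x')
rho = sp.Function('rho')(t, x)
u = sp.Function('u')(t, x)

# 1-D conservative form LHS: d(rho u)/dt + d(rho u^2)/dx
conservative = sp.diff(rho*u, t) + sp.diff(rho*u**2, x)
# 1-D non-conservative form LHS: rho (du/dt + u du/dx)
nonconservative = rho*(sp.diff(u, t) + u*sp.diff(u, x))
# continuity equation LHS: drho/dt + d(rho u)/dx
continuity = sp.diff(rho, t) + sp.diff(rho*u, x)

# The difference is exactly u times the continuity LHS.
assert sp.simplify(conservative - nonconservative - u*continuity) == 0
```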
Expected price change decomposition from finance: bogus or can it be made rigorous?
This is an imprecise statement about the approximate expected return of a security price $P(t,x)$ that is a function of time $t$ and a random factor $x$. The authors either assume the reader has a background in stochastic calculus and can fill in the details or is willing to accept an intuitive argument. More rigorously, suppose, for example, the factor follows a stochastic process governed by the SDE $dx_t = \mu \, dt + \sigma \, dW_t$ where $W_t$ is a Wiener process. The process for $P$ can be derived using Ito's lemma, $$dP = \frac{\partial P}{\partial t} \, dt + \frac{\partial P}{\partial x} dx_t + \frac{1}{2}\frac{\partial^2 P}{\partial x^2}dx_t^2 + \ldots$$ Since $dW_t^2 = \mathcal{O}(dt),$ this leads to $$\frac{dP}{P} = \frac{1}{P}\left(\frac{\partial P}{\partial t} + \mu\frac{\partial P}{\partial x} + \frac{\sigma^2}{2}\frac{\partial^2 P}{\partial x^2}\right) dt + \frac{1}{P} \frac{\partial P}{\partial x} dW_t, $$ and the "expected return" or drift is $$\mathbb{E}\left[ \frac{dP}{P}\right]= \frac{1}{P}\left(\frac{\partial P}{\partial t} + \mu\frac{\partial P}{\partial x} + \frac{\sigma^2}{2}\frac{\partial^2 P}{\partial x^2}\right) dt. $$ In writing $$\mathbb{E}\left[ \frac{dP}{P}\right]= \frac1P\frac{\partial P}{\partial x}\mathbb E\left[dx\right] + \frac1P\frac{\partial P}{\partial t} dt,$$ the authors are neglecting the contribution from the second-order partial derivative, since $\mathbb{E}[dx_t] = \mu \, dt$.
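A rough numerical illustration (my own sketch, not from the text): take $P(t,x)=e^x$, for which Ito's formula predicts a drift of $(\mu+\sigma^2/2)\,dt$; a Monte Carlo sample mean of $dP/P$ should approach it.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, dt, n = 0.05, 0.2, 1e-2, 4_000_000

dW = rng.normal(0.0, np.sqrt(dt), n)
dx = mu*dt + sigma*dW
dP_over_P = np.exp(dx) - 1.0          # exact relative increment of P = e^x

print(dP_over_P.mean() / dt)          # ~ 0.07, up to Monte Carlo noise
print(mu + 0.5*sigma**2)              # Ito prediction: 0.07
```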
anomaly in problem of independent shots
The target is undamaged when both shots miss. That each shot misses is independent of the other. The probability that both miss, $P(\neg A \cap \neg B)$, is $(1-0.9)(1-0.8) = 0.02$. There is a 2% chance the target is undamaged. Therefore, since $P(A \cup B) = 1 - P(\neg (A \cup B)) = 1 - P(\neg A \cap \neg B) = 1 - 0.02 = 0.98$, there is a 98% chance the target is damaged. (There are many notations for the complement of an event. I have used "$\neg$" above for the complement. Notice that we are using De Morgan's Laws to get $\neg (A \cup B) = \neg A \cap \neg B$.)
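A quick simulation (my own sketch) confirming the 98% figure:

```python
import random

random.seed(1)
trials = 1_000_000
# shot A hits with probability 0.9, shot B with probability 0.8
damaged = sum((random.random() < 0.9) or (random.random() < 0.8)
              for _ in range(trials))
print(damaged / trials)   # ~ 0.98
```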
if $(|N|,[G:N])=1$, then $G$ has a unique subgroup of order $|N|$
Let $K$ be another subgroup of $G$ such that $|K| = |N|$. Since $N$ is a normal subgroup of $G$, the product $NK$ is also a subgroup of $G$, and $$|NK| = \frac{|N||K|}{|N \cap K|}.$$ Since $NK$ is a subgroup of $G$ containing $N$, Lagrange's theorem gives that $[NK:N] = |N|/|N \cap K|$ divides $[G:N]$. But $|N|/|N \cap K|$ also divides $|N|$, and $(|N|, [G:N]) = 1$, so $|N|/|N \cap K| = 1$, i.e., $|N| = |K| = |N \cap K|$. Hence $N = K$; that is, $N$ is the unique subgroup of order $|N|$.
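A small sanity check (my own) in SymPy: in $S_3$, the subgroup $A_3$ is normal of order $3$ and index $2$, with $\gcd(3,2)=1$, and indeed $S_3$ has exactly one subgroup of order $3$. Since a subgroup of prime order is cyclic, enumerating cyclic subgroups suffices here.

```python
from sympy.combinatorics import PermutationGroup
from sympy.combinatorics.named_groups import SymmetricGroup

G = SymmetricGroup(3)
# every subgroup of order 3 is cyclic, so it is generated by an order-3 element
subgroups = {frozenset(PermutationGroup([g]).elements)
             for g in G.elements if g.order() == 3}
print(len(subgroups))   # 1 -- the unique subgroup of order 3 (namely A3)
```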
The distribution of the minimum of two independent geometric random variables
Let $X\sim\mathcal{Geo}(p), Y\sim\mathcal{Geo}(q), X\perp Y$ $\begin{align} \Pr(X\geq k) & = (1-p)^{k-1} & \impliedby X\sim \mathcal{Geo}(p) \tag{1} \\[2ex] \Pr(Y\geq k) & = (1-q)^{k-1}& \impliedby Y\sim \mathcal{Geo}(q)\tag{2} \\[2ex] \Pr(\min(X,Y)\geq k) & = \Pr(X\geq k,Y\geq k) \\[1ex] & = \Pr(X\geq k)\Pr(Y\geq k) & \impliedby X\perp Y \\[1ex] & = (1-p)^{k-1}(1-q)^{k-1} & \impliedby (1)\wedge (2) \tag{3} \\[2ex] \Pr(\min(X,Y)= k) & = \Pr(\min(X,Y)\geq k) - \Pr(\min(X,Y)\geq k+1) \\[1ex] & = (1-p)^{k-1}(1-q)^{k-1} - (1-p)^{k}(1-q)^{k} \\[1ex] & = (p+q-pq)((1-p)(1-q))^{k-1} \\[1ex] & = (p+q-pq)(1-(p+q-pq))^{k-1} \end{align}$ Another approach. $X$ is the number of trials until a success with trial probability $p$, and $Y$ is the number of trials until a success with trial probability $q$; then $\min(X,Y)$ is the number of trials until either succeeds, so it is geometric with trial probability $p+q-pq$ (the probability of the union). Then $\min(X,Y) \sim\mathcal{Geo}(p+q-pq)$
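A Monte Carlo check (my sketch): the minimum of two independent geometrics on $\{1,2,\dots\}$ should again be geometric with parameter $p+q-pq$.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, n = 0.3, 0.5, 1_000_000
M = np.minimum(rng.geometric(p, n), rng.geometric(q, n))
r = p + q - p*q
for k in (1, 2, 3):
    print(k, (M == k).mean(), r*(1 - r)**(k - 1))   # empirical vs. formula
```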
Integral u-substitution of $u=\frac{2x}{1+x^2}$
The two substitutions are both valid. Note the reciprocal relationship between the two choices, $$\frac{1 + \sqrt{1 - u^2}}{u} =\frac{u}{1- \sqrt{1 - u^2}},$$ which shows that they are not equivalent: one is the reciprocal of the other.
Distribution into different groups if blank groups are not permissible
The formulation of statement $3$ is quite unclear. Reverse-engineering from the result, we can surmise that the intended meaning (perhaps indicated by the word “arrange”, as opposed to “distribute” in the other two statements) is to count the linear arrangements of $n$ elements in $r$ groups. There are $n!$ different arrangements of the $n$ elements, and in each of them we can place separators in $r-1$ out of $n-1$ possible positions between the elements to divide the linear arrangement into $r$ groups, for a total of $n!\binom{n-1}{r-1}$ grouped arrangements.
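A brute-force check (my own sketch) of the count $n!\binom{n-1}{r-1}$ for small $n$ and $r$, enumerating each permutation together with a choice of $r-1$ cut positions:

```python
from math import comb, factorial
from itertools import combinations, permutations

def formula(n, r):
    return factorial(n) * comb(n - 1, r - 1)

def brute(n, r):
    # enumerate distinct grouped arrangements directly: an ordered list of r
    # non-empty ordered groups using each of the n elements exactly once
    seen = set()
    for perm in permutations(range(n)):
        for cuts in combinations(range(1, n), r - 1):
            bounds = (0,) + cuts + (n,)
            seen.add(tuple(perm[bounds[i]:bounds[i + 1]] for i in range(r)))
    return len(seen)

print(formula(4, 2), brute(4, 2))   # 72 72
print(formula(5, 3), brute(5, 3))   # 720 720
```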
Ratios With Three Factors?
Rates are, by definition, a comparison of two different variables. However, if one wants to do a comparison of more than two variables, we can take a lesson from multivariable calculus and compare multiple different rates (in your case, price vs. square feet, square feet vs. distance from town, and price vs. distance from town). Alternatively, you could assign different "weights" to different factors and compare them that way. For instance, you could assign a score to each of the three factors, perhaps in the interval $[0,1]$, and conclude at the end that the higher score is better. The problem with this method, however, is that it introduces a certain subjectivity that math prides itself on avoiding.
Hatcher's exercise 1.2.22 on the Wirtinger presentation
Here is a picture taken from Out of Line, "Paths and Knot Spaces". There is more discussion in Topology and Groupoids, p. 350; this simple crossing diagram in some sense assumes one is using the fundamental groupoid: insistence on one base point is not natural to the knot situation. I have demonstrated the crossing relation to children using a copper pentoil and rope, ending up with a string wrapping on the pentoil and asking one of the children to show how the loop comes off the knot!
Sturm-Liouville : $y''+\lambda y=0$, $y'(0)=0$ and $y(6)=0$
$y''+\lambda y=0$. Case $1$: $\lambda=0$. Then $y''=0\implies y(x)=Ax+B.\quad y(6)=0\implies6A+B=0\implies B=-6A\implies y(x)=A(x-6)$. $y'(0)=0\implies A=0$. So $\lambda=0$ is not an eigenvalue. Case $2$: $\lambda >0$. Let $\lambda=\alpha^2$, $0\ne \alpha \in \mathbb{R}$. Then $y''+\alpha^2 y=0\implies y(x)=A\cos \alpha x+B\sin \alpha x. \quad y'(0)=0\implies B=0. \quad\therefore y(x)=A\cos \alpha x.\quad y(6)=0\implies A\cos6\alpha=0$. For a non-trivial solution we must have $\cos 6\alpha=0\implies 6\alpha=(2n+1)\frac{\pi}{2}\implies \alpha_n=(2n+1)\frac{\pi}{12}$. Correspondingly, $\lambda_n=(2n+1)^2\frac{\pi^2}{144}$, which are the eigenvalues. Now find the corresponding eigenfunctions. Then do Case $3$ for $\lambda<0$.
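A numerical sanity check (my own sketch): integrate $y''+\lambda_n y=0$ with $y(0)=1$, $y'(0)=0$ for the first few $\lambda_n$ and confirm that $y(6)\approx 0$.

```python
import numpy as np
from scipy.integrate import solve_ivp

for n in range(3):
    lam = ((2*n + 1)*np.pi/12)**2
    sol = solve_ivp(lambda t, y: [y[1], -lam*y[0]], (0, 6), [1.0, 0.0],
                    rtol=1e-10, atol=1e-12)
    print(n, sol.y[0, -1])   # ~ 0 each time: the condition y(6) = 0 holds
```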
How to prove that nested intervals form an equivalence relation?
It's true. I'll show transitivity. Define the relation between sequences of closed intervals as: $$\begin{align} ([a_n,b_n])_{n\in\Bbb N} \simeq ([c_n,d_n])_{n\in\Bbb N} \iff & \forall n\in\Bbb N: \\ &\quad a_n \le d_n, \text{ and }\tag{i} \\ &\quad c_n\le b_n.\tag{ii} \end{align}$$ A sequence of nested intervals is a sequence with the properties (a) and (b). These properties imply that there is a unique point $L$ in the intersection of the intervals, which moreover equals the sup of the left endpoints, the inf of the right endpoints, and the limit of both endpoint sequences: Any sequence of nested intervals $([a_n,b_n])_{n\in\Bbb N}$ converges to a limit $L$, in several ways: $$\begin{align}\tag{$\dagger$}\\ L &= \lim_n a_n = \sup_n a_n\\ &= \lim_n b_n = \inf_n b_n \\ &= \text{the unique element of }\bigcap_n [a_n,b_n]. \end{align}$$ This is fairly well known; proof or reference on request. Claim: $\simeq$-related sequences of nested intervals converge to the same limit: For sequences of nested intervals $([a_n,b_n])_{n\in\Bbb N}$, $([c_n,d_n])_{n\in\Bbb N}$, $$\text{if } ([a_n,b_n])_{n\in\Bbb N} \simeq ([c_n,d_n])_{n\in\Bbb N},\ \text{ then } \lim_n a_n = \lim_n c_n = \lim_n b_n = \lim_n d_n.\tag{*}$$ Indeed, given such sequences, let $L=\lim_n a_n$; then we have: $$\begin{align} L = \lim_n a_n &\le \lim_n d_n\tag{by (i)} \\ &= \lim_n c_n\tag{by ($\dagger$)} \\ &\le \lim_n b_n\tag{by (ii)} \\ &= L.\tag{by ($\dagger$)} \\ \end{align}$$ Finally, given sequences of nested intervals $$([a_n,b_n])_{n\in\Bbb N} \simeq ([c_n,d_n])_{n\in\Bbb N} \simeq ([e_n,f_n])_{n\in\Bbb N},$$ we will show that, for every $m\in\Bbb N$, $$ a_m \le f_m \text{ and } e_m\le b_m,\tag{$result$} $$ so that $$ ([a_n,b_n])_{n\in\Bbb N} \simeq ([e_n,f_n])_{n\in\Bbb N}. $$ By hypothesis, all six endpoint sequences converge to the same limit $L$, which also equals the $\sup$s and $\inf$s of the left- and right-hand endpoint sequences respectively. Given $m$, we have, by (*), $$a_m \le \sup_n a_n = L = \inf_n f_n \le f_m,$$ and similarly, $$e_m \le \sup_n e_n = L = \inf_n b_n \le b_m;$$ hence the result. That result, true for all $m$, says exactly that $\simeq$ holds between the sequences $([a_n, b_n])$ and $([e_n, f_n])$, so the conclusion follows.
Prove $\, _6F_5\left(\{\frac12\}_5,\frac{5}{4};\frac{1}{4},\{1\}_4;-1\right)=\frac{2}{\Gamma \left(\frac{3}{4}\right)^4}$ and another
The first identity can be written as $$ -\sum_{n\geq 0}\left[\frac{1}{4^n}\binom{2n}{n}\right]^4\frac{1}{(n+1)(2n-1)} +2\sum_{n\geq 0}\left[\frac{1}{4^n}\binom{2n}{n}\right]^4\frac{(2n+1)^3}{(n+1)^3(n+2)}=\frac{8}{\pi^2}$$ and this identity can be proved by reindexing and by considering the FL-expansions of $\left[x(1-x)\right]^\mu$ for $\mu\in\frac{1}{4}\mathbb{Z}$, as stated in the intro of this forthcoming article. I hope this will speed up the review process, more than a year has passed since the submission of this article, which might bring something useful to the table, i.e. the fact that fractional operators can be used together with standard operators in stressing the interplay between hypergeometric functions and Euler sums. Actually the first identity is equivalent to $$ 1 = \int_{0}^{1}\sqrt{x(1-x)}\frac{dx}{\sqrt{x(1-x)}} = \sum_{n\geq 0}\frac{c_{2n} d_{2n}}{4n+1} $$ where $$ \sqrt{x(1-x)}\stackrel{L^2(0,1)}{=}\sum_{n\geq 0}c_{2n}P_{2n}(2x-1),\qquad \frac{1}{\sqrt{x(1-x)}}\stackrel{\mathcal{D}}{=}\sum_{n\geq 0}d_{2n}P_{2n}(2x-1).$$ The second identity, involving $\left[\frac{1}{4^n}\binom{2n}{n}\right]^5$, is a consequence of Brafman's formula for $s=\frac{1}{2}$ and the evaluation of FL-expansions at $x=\frac{1}{2}$, together with the special value $K\left(\frac{1}{2}\right)=\frac{1}{4\sqrt{\pi}}\Gamma\left(\frac{1}{4}\right)^2$. Indeed $$ K(x)K(1-x)\stackrel{L^2(0,1)}{=}\frac{\pi^3}{8}\sum_{n\geq 0}\left[\frac{1}{4^n}\binom{2n}{n}\right]^4(4n+1)P_{2n}(2x-1) $$ and by evaluating both sides at $x=\frac{1}{2}$ $$ \sum_{n\geq 0}\left[\frac{1}{4^n}\binom{2n}{n}\right]^5(4n+1)(-1)^n = \frac{\Gamma\left(\frac{1}{4}\right)^4}{2\pi^4}.$$ Besides this, a closed form for the simple-looking $\phantom{}_4 F_3$ $$ \sum_{n\geq 0}\left[\frac{1}{4^n}\binom{2n}{n}\right]^4 $$ still eludes me.
Constructing a function that is continuous and has a max on an open interval, but is not necessarily increasing immediately to the left of the max.
Your original idea sounds good to me. For example, you could start with the function $$f(x) = \begin{cases}x \sin(1/x) & \text{if } x \ne 0 \\ 0 & \text{if } x = 0, \end{cases}$$ which is continuous on the entire real line and differentiable everywhere except at zero, and looks like this in the vicinity of zero: Then you just need to tweak $f$ to have a global maximum at $x = 0$, for example by subtracting $2|x|$ from it. The resulting function $g(x) = f(x)-2|x|$ looks like this in the vicinity of zero: Ps. If you want a counterexample which is even differentiable everywhere, try $$h(x) = x f(x) - 2x^2 = \begin{cases}x^2 \sin(1/x) - 2x^2 & \text{if } x \ne 0 \\ 0 & \text{if } x = 0, \end{cases}$$ which looks like this: Proving that $h$ is differentiable at zero is fairly easy from first principles, just by showing that the difference quotient $$ \frac{h(x)-h(0)}{x} = x \sin(1/x) - 2x $$ converges to $0$ in the limit as $x \to 0$, which in turn can be done using the squeeze theorem in exactly the same way as for showing the continuity of $f$ and $g$ at zero. Then you just need to show that the derivative of $h$ takes both positive and negative values on every interval with one endpoint at zero.
no quadratic extension of $\mathbb{Q}$ in $\mathbb{Q}[e^{\frac{2\pi i}{5}}]$?
$\theta$ is a fifth root of unity; $\mathbb{Q}(\theta) / \mathbb{Q}$ is an abelian extension. That is, it is a Galois extension with abelian Galois group. Every abelian group $G$ of order $n$ has, for every $m \mid n$, at least one subgroup $H$ of order $m$. Consequently, the extension $\mathbb{Q}(\theta) / \mathbb{Q}$ has at least one subextension of every degree dividing $[\mathbb{Q}(\theta) : \mathbb{Q}]$. There are two ways to produce the quadratic subextension. We can identify the quadratic extension by looking at the ramification in the ring of integers $\mathbb{Z}[\theta]$. For every $p$ except $5$, the algebraic closure of $\mathbb{F}_p$ has four distinct primitive fifth roots of unity. However, over $\mathbb{F}_5$, every fifth root of unity is $1$. Consequently, the extension ramifies only over the prime $5$. The extension must be either $\mathbb{Q}(\sqrt{5})$ or $\mathbb{Q}(\sqrt{-5})$. Studying how ramification over $2$ works implies that we must be taking the square root of a number that is $1 \bmod 4$; thus the extension is $\mathbb{Q}(\sqrt{5})$. In general, for odd $p$, $\mathbb{Q}(\zeta_p)$ will contain either $\mathbb{Q}(\sqrt{p})$ or $\mathbb{Q}(\sqrt{-p})$ as a subfield; the correct square root is whichever of $\pm p$ is $1 \bmod 4$. Another way to produce the quadratic subextension is to observe that $\mathbf{Q}(\theta)$ has complex embeddings, and that complex conjugation acts on the field. Thus, it has a subfield fixed by complex conjugation. We can even identify the subfield as: $$\mathbb{Q}(\theta + \bar{\theta}) \subseteq \mathbb{Q}(\theta)$$ Since complex conjugation has order 2 (or simply by writing down the minimal polynomial of $\theta$ over the subfield), the extension $\mathbb{Q}(\theta)/\mathbb{Q}(\theta + \bar{\theta})$ has degree $2$, and thus $[\mathbb{Q}(\theta + \bar{\theta}) : \mathbb{Q}] = 2$.
How many sequences with $k$ different values less than $d$?
This will be a fair mess to compute. You can pick the specific $k$ values in $d \choose k$ ways. Now pick $m$, the number of elements less than $d$. For each $m$, you can pick the positions in the list in $n \choose m$ ways, the values of those positions in $k^m$ ways and the values of the other positions in $(\ell-m)^{(n-d)}$ ways. This gives the number of ways to have at most $k$ different values less than $d$ as $$\sum_{m=k}^\ell {d \choose k}{n \choose m}k^m(\ell-m)^{(n-d)}$$ Unfortunately, we have counted the ones with exactly $k-1$ values $d-k+1$ times each, so we need to subtract them. There are $$\sum_{m=k-1}^\ell {d \choose k-1}{n \choose m}(k-1)^m(\ell-m)^{(n-d)}$$ of these. Then we continue with those with $k-2$ values, following the inclusion-exclusion principle.
Show that $\sum\limits^\infty_{k=1}k^{-s}$ converges if and only if $s>1$ for positive $s$.
Use the Cauchy condensation test: Your series converges iff the series $$\sum_{k=1}^\infty 2^k \left(2^k \right)^{-s}=\sum_{k=1}^\infty 2^{k(1-s)} $$ converges - but the latter is a simple geometric series with ratio $2^{1-s}$, which converges iff $2^{1-s}<1$, i.e., iff $s>1$.
Derivative and divergence of measures, product of measures and vector field
A distribution is a linear functional on $C_c^\infty(\mathbb{R}^d),$ the space of infinitely differentiable functions with compact support. The action of a distribution $u$ on the test function $\varphi \in C_c^\infty(\mathbb{R}^d)$ is often denoted by $\langle u, \varphi \rangle.$ A measure $\mu$ induces a distribution by $\langle \mu, \varphi \rangle := \int \varphi \, d\mu.$ For $\mu : [0,1] \to P_2(\mathbb{R}^d)$ we define $\partial_t \mu_t$ by $$ \langle \partial_t \mu_t, \varphi \rangle := \frac{d}{dt} \langle \mu_t, \varphi \rangle = \frac{d}{dt} \int \varphi \, d\mu_t, $$ define $\mu_t v_t$ (a vector-valued distribution) by $$ \langle \mu_t v_t, \varphi \rangle = \langle \mu_t, v_t \varphi \rangle = \langle \mu_t, (v_t^1, \ldots, v_t^d) \varphi \rangle = (\langle \mu_t, v_t^1 \varphi \rangle, \ldots, \langle \mu_t, v_t^d \varphi \rangle), $$ and, as usual for distributional derivatives, $$ \langle \nabla\cdot(\mu_t v_t), \varphi \rangle = - \langle \mu_t v_t, \nabla\varphi \rangle = - \langle \mu_t, v_t \cdot \nabla\varphi \rangle . $$ Note: For the above to make sense with respect to the definition of distributions as linear functionals on $C_c^\infty$, the vector field $v_t$ needs to be $C^\infty.$ But I guess that $\mu_t$ having finite second moment allows us to reduce the requirements on the test functions and on $v_t.$
Exponential Generating function for 1 0 0 1 0 0 1 0 0...
In general, if you have a formal power series $$f(x)=a_0+a_1x+a_2x^2+a_3x^3+\ldots$$ and you want to extract only the terms whose exponents are divisible by some $k$, you can use a trick with the $k^{th}$ roots of unity to cancel all the other terms. In particular, let $\zeta_{n,k}=e^{2\pi i\cdot n/k}$. The $k^{th}$ roots of unity are exactly $\zeta_{0,k},\zeta_{1,k},\ldots,\zeta_{k-1,k}$ and one has that $\zeta_{a,k}\cdot\zeta_{b,k}=\zeta_{a+b,k}$. It is relatively easy to show that $$\zeta_{0,k}^n+\zeta_{1,k}^n+\zeta_{2,k}^n+\ldots+\zeta_{k-1,k}^n=\begin{cases}k&\text{if }k\text{ divides }n\\0 &\text{otherwise} \end{cases}$$ Using this, one can figure out that if we define $$g(x)=\frac{1}k\cdot \left(f(\zeta_{0,k}x) + f(\zeta_{1,k}x)+f(\zeta_{2,k}x)+\ldots+f(\zeta_{k-1,k}x)\right)$$ we will have that the power series of $g$ is just the same as $f$, except with every term whose exponent is not divisible by $k$ removed. Whether we are considering this a generating function or an exponential generating function is irrelevant to this argument.
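A numeric sketch (my own) of the filter for $k=3$ applied to $f(x)=e^x$: the resulting $g$ is the EGF of $1,0,0,1,0,0,\ldots$ from the question.

```python
import cmath
from math import factorial

def g(x, k=3):
    # average f over x multiplied by each k-th root of unity
    return sum(cmath.exp(x * cmath.exp(2j*cmath.pi*n/k)) for n in range(k)) / k

x = 0.7
series = sum(x**(3*m)/factorial(3*m) for m in range(10))   # sum x^{3m}/(3m)!
print(g(x).real, series)   # agree to machine precision
```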
Find and classify the critical points
Hint No, from the first equation, you get $y = \dfrac{-x^2 + 21}{4}$. Substitute that into the second and you should get $\frac{1}{16} \left(x^4-42 x^2+64 x+105\right) = 0$. This gives four $x$ values of $x = -7, -1, 3, 5$. Can you continue with the classification? Note, not all of the critical points may yield a classification, but you would test them all to find out.
Estimate for discrete convolutions
When we divide $A_{n}$ by $(n+1)^{\alpha+\beta+1}$, we have $$ \frac{A_{n}}{(n+1)^{\alpha+\beta+1}} = \frac{1}{n+1}\sum_{m=1}^{n}\left(1-\frac{m}{n+1}\right)^{\alpha}\left(\frac{m}{n+1}\right)^{\beta} $$ and the RHS is a Riemann sum converging to $$ \int_{0}^{1}(1-x)^{\alpha}x^{\beta} dx = B(\alpha+1, \beta+1), $$ which is a Beta function. Hence we have $A_{n}\sim B(\alpha+1,\beta+1)\, n^{\alpha+\beta+1}$.
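A quick numeric check (my own sketch, taking $A_n=\sum_{m=1}^{n}(n+1-m)^{\alpha}m^{\beta}$ as in the identity above):

```python
from math import gamma

def beta_fn(a, b):
    return gamma(a)*gamma(b)/gamma(a + b)

alpha, beta, n = 0.5, 1.5, 100_000
A = sum((n + 1 - m)**alpha * m**beta for m in range(1, n + 1))
print(A / (n + 1)**(alpha + beta + 1))   # ~ B(alpha+1, beta+1)
print(beta_fn(alpha + 1, beta + 1))
```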
Recurrence relation lab modelling
Let $p_n$ be the probability that you take your assigned seat if you are the last of $n$ students; clearly $p_2=\frac12$. Now suppose that you’re the last of $n+1$ students. If the first student takes that student’s assigned seat, each of the remaining $n$ students, including you, will take his or her assigned seat; this occurs with probability $\frac1{n+1}$. Now suppose that the first student takes an incorrect seat. If that seat is your assigned seat, you definitely will not get your assigned seat, so we can limit ourselves to the case in which the first student takes an incorrect seat that is not yours. Suppose that the first student takes seat $k$, where $1<k<n+1$. Then students $2,3,\ldots,k-1$ will take their assigned seats, and seats $1$ and $k+1,\ldots,n+1$ will be available for student $k$. Claim: The probability that you will get your assigned seat is $p_{n-k+2}$. To see why this is so, note that the first $k-1$ students have taken seats, so you are the last of the $(n+1)-(k-1)=n-k+2$ students who have not yet taken a seat. Student $k$ is the first of this remaining group of $n-k+2$ students. Temporarily relabel seat $1$ as seat $k$. Then we have a group of $n-k+2$ students, each of whom has a correct seat, the first student in the group (i.e., student $k$) must choose one at random, and you are the last student in the group. By definition $p_{n-k+2}$ is the probability that in this situation you will get your assigned seat. The probability that the first student takes seat $k$ is $\frac1{n+1}$, so $$\begin{align*} p_{n+1}&=\frac1{n+1}+\sum_{k=2}^n\frac{p_{n-k+2}}{n+1}\\ &=\frac1{n+1}\left(1+\sum_{k=2}^np_{n-k+2}\right)\\ &=\frac1{n+1}\left(1+\sum_{k=2}^np_k\right)\;, \end{align*}$$ where in the last step I’ve substituted $k$ for $n+2-k$: as $k$ runs from $2$ through $n$, $n+2-k$ runs from $n$ down through $2$. This is your recurrence: it expresses $p_{n+1}$ in terms of the values $p_k$ for $2\le k\le n$. If you recursively calculate $p_n$ for a few small values of $n\ge 2$, you should easily be able to conjecture a closed form for $p_n$ and prove it by induction. This question and its answers will let you confirm that your closed form is correct and show some other ways to arrive at it.
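Computing the recurrence numerically (my own sketch) makes the closed form easy to conjecture:

```python
def p_values(N):
    ps = {2: 0.5}
    for n in range(3, N + 1):
        # p_n = (1 + sum_{k=2}^{n-1} p_k) / n, from the recurrence above
        ps[n] = (1 + sum(ps[k] for k in range(2, n))) / n
    return ps

print(p_values(10))   # every value is 0.5
```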
Collinearity of three points
As you write, vectors $a,b,c$ are coplanar iff there are scalars $x,y,z$, not all $0$, such that $xa+yb+zc=0$, i.e. iff they are linearly dependent. The endpoints of $a,b,c$ are collinear iff $c-b$ is parallel to $c-a$, that is, $c-a=t(c-b)$ for some scalar $t$, assuming $b\ne c$. But then $1a+(-t)b+(t-1)c=0$ and these coefficients sum up to $0$. Conversely, if $xa+yb+zc=0$ with $x+y+z=0$, then either $x=0$ whence $y=-z$ and $b=c$, or we can divide by $x$ and set $t=-y/x$ to conclude $c-a=t(c-b)$.
Which function "loops" after differentiating it $n$ times?
Your condition is a linear homogeneous differential equation: $$ y^{(n)} - y = 0 $$ It has the characteristic equation $$ z^n - 1 = 0 $$ with the $n$ complex roots of unity $\lambda_k = e^{2\pi k i/n}$, $k \in \{0,\dotsc, n-1\}$, as solutions. As a linear differential equation, the general solution is a linear combination of solutions. In this case $$ y(x) = \sum_{k=0}^{n-1} C_k e^{\lambda_k x} $$
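A sketch using SymPy for $n=4$: `dsolve` recovers a combination of $e^{\pm x}$, $\sin x$, and $\cos x$, matching the four roots $1, i, -1, -i$ of $z^4=1$.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
# solve y'''' = y
print(sp.dsolve(sp.Eq(y(x).diff(x, 4), y(x))))
# y(x) = C1*exp(-x) + C2*exp(x) + C3*sin(x) + C4*cos(x)
```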
Exponential map generates the identity component of closed linear groups
The image of the exponential map isn't a subgroup in general, it only generates the identity component. See here: (non?)-surjectivity of the exponential map to $SL(2,\mathbb{C})$
Prove $f(x)=\|x\|$ differentiable everywhere but in $\{0\}$
The differential is given by $$Df(x_1,x_2) = \left( \frac{x_1}{\|x\|}, \frac{x_2}{\|x\|} \right),$$ which is defined for $x \in \mathbb{R}^2 \setminus \{0\}$. At $x=0$, $f$ is not differentiable: along the first coordinate axis the difference quotient $\frac{\|t e_1\|-0}{t}=\frac{|t|}{t}$ has no limit as $t\to 0$.
How to compute the radius of convergence of $\sum_{n=0}^{\infty}{4^n\cdot x^{2^n}}$
$$\frac{4^{n+1}x^{2^{n+1}}}{4^nx^{2^n}}=4x^{2^n}$$ ensures convergence for all $$|x|<1$$ and divergence for $$|x|\ge1.$$
Prove that the derivative is unique
Hint: If $f(x)-f(c)=A_1(x-c)+\eta_1(x)=A_2(x-c)+\eta_2(x)$ then $ A_1-A_2=\frac{\eta_2(x)-\eta_1(x)}{x-c}$. Now take the limit as $x\to c$.
Can every infinite set be partially ordered in a way that does not have maximal elements?
The answer given by quasi is correct - if we assume the axiom of choice. Without choice, things get more complicated: there can be infinite sets which don't admit injections from $\mathbb{N}$ (that is, such that we can't in fact "pick distinct $x_1,x_2, ...$" to set up the situation in quasi's answer). In fact, it turns out to be consistent with ZF (= set theory without choice) that there are infinite sets which cannot be partially ordered without maximal elements. The proof of this is via forcing, which is unfortunately too complicated to get into here. EDIT: In particular, any amorphous set has this property. Suppose $A$ is amorphous and $<_A$ is a partial ordering of $A$ with no maximal element. Then for each $a\in A$, the set $\{x\in A: a<_Ax\}$ is infinite, hence cofinite since $A$ is amorphous. Think about the function $d$ sending each $a\in A$ to $d(a)=\vert\{x\in A: a\not<_Ax\}\vert$. By the above, $d:A\rightarrow\mathbb{N}$, and it's clear that the range of $d$ is unbounded (if $x<_Ay$ then $d(x)<d(y)$). If we partition the range of $d$ into two infinite pieces (this can be done, since the range is infinite and well-orderable), then pulling this partition back along $d$ gives a partition of $A$ into two disjoint infinite sets, contradicting the amorphousness of $A$.
Not quite alternating series
You can split it up into two alternating power series. Consider the values for even $n$ as one, and for odd $n$ as the other: $$\sum_{n \geq 0}(-1)^{n(n+1)/2}a_nx^n = \sum_{n \geq 0}(-1)^{n}a_{2n}x^{2n} + \sum_{n \geq 0}(-1)^{n+1}a_{2n+1}x^{2n+1},$$ since $(-1)^{(2n)(2n+1)/2}=(-1)^{n(2n+1)}=(-1)^n$ and $(-1)^{(2n+1)(2n+2)/2}=(-1)^{(2n+1)(n+1)}=(-1)^{n+1}$.
Why do the characters of an abelian group form a group?
The inverse is given by taking the dual representation, which has character the complex conjugate of the original character. In the $1$-dimensional case this is the inverse of the original character.
$A\subseteq B\;\wedge B\cap C\subseteq A\overset{?}\implies C^c\cap A\subseteq B^c$
Technically, you haven’t shown that it is possible to have $A\not\subseteq C$, and still meet the hypothesis. You should include an example. A simple example would be to have: $$A=\{2,3\}$$ $$B=\{2,3,4\}$$ $$C=\{1,2\}$$ Then you have the hypothesis without the conclusion, which means the implication is false.
Path connectedness and locally path connected
One counterexample is a variant on the famous topologist's sine curve. Consider the graph of $y = \sin(\pi/x)$ for $0<x<1$, together with a closed arc from the point $(1,0)$ to $(0,0)$. This space is obviously path-connected, but it is not locally path-connected (or even locally connected) at the point $(0,0)$.
Angles of triangle inside a circle
Given that the area of the circle is $100\pi$, its radius $OC$ is $10$. Using the Pythagorean theorem one finds that $OA=8$. From there you may work out $AB$ and use the inverse trigonometric functions to find the angles.
$I-cP$ Invertible Matrix
Guide: \begin{align}(I-cP)(I+dP)&=I-cP+dP-cdP^2 \\ &=I+(-c+d-cd)P\quad\text{(using } P^2=P\text{)}\end{align} We just have to solve for $d$ in $$-c+d-cd=0.$$ I will leave this task to you.
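If you want to check the algebra numerically, here is a sketch (my own; it assumes $P$ is idempotent, $P^2=P$, and $c\neq 1$ -- note it also reveals the value of $d$, so solve the equation yourself first):

```python
import numpy as np

c = 0.3
d = c / (1 - c)                       # the solution of -c + d - c*d = 0
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])            # an idempotent matrix: P @ P == P
I = np.eye(2)
print(np.allclose((I - c*P) @ (I + d*P), I))   # True
```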
Every n × n-matrix A with real entries has at least one real eigenvalue.
Nope. Try a rotation in $\Bbb R^2$.
Spaces of (complete) separable metric spaces
"there are only continuum-many complete separable metric spaces" -- Correct. It's not necessary to appeal to those theorems. A countable metric space $C$ is determined up to isometry by its infinite distance matrix $D_{ij} = d(c_i, c_j)$, $i, j\in\mathbb{N}$. There are continuum-many such matrices. The map sending each countable metric space to its completion is a surjection onto the space of all complete separable spaces (up to isometry). (1a) The Gromov–Hausdorff limit $X$ of a sequence of separable metric spaces $X_n$ is separable. Indeed, for each $\epsilon$ we can pick $n$ such that $d_{GH}(X, X_n)<\epsilon$, and a countable $\epsilon$-net in $X_n$. Throw this net over to $X$ (pick some neighbor for each point), and you'll get a countable $(3\epsilon)$-net for $X$. Repeat for $\epsilon = 1/k$, take the union. (1b) Completeness should be required of all spaces, otherwise the Gromov–Hausdorff distance is not a metric. Indeed, the distance between a space and its completion is always zero. So we may as well quotient out all non-complete spaces, replacing them by their completion. For a concrete example, $X_n = [0, 1-1/n]$ converge to $X=[0, 1)$ (all spaces being considered with the Euclidean metric). Of course $X$ is not complete. Then again, the sequence also converges to $[0, 1]$. There is no reason to consider $[0, 1)$ in this context. (1c) No, the space of complete separable spaces contains an uncountable uniformly separated subset. One way to construct such a family of spaces is to define, for each infinite subset $A\subset \mathbb{N}$ (there are still continuum-many of these), $$ X_A = \{0\} \cup A, $$ with the metric inherited from $\mathbb{R}$. Then $X_A$ and $X_B$ are not isometric unless $A=B$. Indeed, $0$ is distinguished in $X_A$ as the only point $x$ that is not "between two others", i.e., $d(x, y)+d(x, z)=d(y, z)$ never holds when $y$ and $z$ are both different from $x$. So an isometry must send $0$ to $0$. But then distances from $0$ determine where everything else goes. Recall that $d_{GH}(X, Y) = \frac12\inf_R \operatorname{dis}(R)$ where the infimum is taken over all left-right-total relations $R$ between $X$ and $Y$, and $$\operatorname{dis}(R) = \sup \{|d_X(x, x') - d_Y(y, y')| : x\,R\,y,\ x'\,R\,y'\}$$ A proof is in Burago–Burago–Ivanov and also here. In our case, all distances in $X_A$ and $X_B$ are integers. So, the distortion of a relation cannot be less than $1$ unless the relation is an isometry. In conclusion, $$ d_{GH}(X_A, X_B)\ge \frac12,\quad \text{ whenever }\ A\ne B $$ The distance may be infinite in general, but there are also uncountably many sets $A$ within finite distance of each other: for example, the sets that contain all odd positive integers.
The Automorphism Group $\Gamma(\mathbb{Q}(\sqrt[n]{2}):\mathbb{Q})$ is trivial if $n$ is odd.
The $\mathbb Q$-automorphisms of $F_n$ are completely determined by where they send $\sqrt[n]2$, and they must send it to another $n$-th root of $2$. But if $n$ is odd, then no other $n$-th root of $2$ is in $F_n$, and if $n$ is even, then only one other $n$-th root of $2$ is in $F_n$ (you might need to prove these facts beforehand). So there are only one/two possible $\mathbb Q$-automorphisms.
Is $\pi$ or $e$ algebraic over $\mathbb R$?
Hint: What about $f(x) = x - e$?
A simple question on simplex
Let me try to expand (and fix!) my comment into an answer: If $a=a_0+\cdots+a_n$, I claim that the similarity $$ \begin{array}{rccc} F:&\mathbb{R}^n&\longrightarrow&\mathbb{R}^n\\ &v&\longmapsto& -nv+a \end{array} $$ sends $S$ to $B$. To prove this we shall see that $F(a_i)$ lies in all the hyperplanes except for $L_i$, which means that $F(a_i)$ is a vertex of $B$. Let $j\neq i$, is it true that $F(a_i)\in L_j$? This happens iff $F(a_i)-a_j$ is parallel to the hyperplane defined by $\{a_0,\ldots,a_n\}\setminus\{a_j\}$, which is the hyperplane generated by the $n-1$ vectors $$a_0-a_i,\ldots,\widehat{a_j-a_i},\ldots,a_n-a_i.$$ Adding all of them up gives $a-a_i-a_j-(n-1)a_i=-na_i+a-a_j=F(a_i)-a_j$.
Spacing between largest eigenvalues in the GUE
A detailed answer to this question is given in this paper by Perret and Schehr.
In the definition of a limit for a metric space, what is epsilon?
A metric space is a set with a topology induced by a distance. Usually, a distance is a function $d:X\times X\rightarrow [0,\infty)$ with certain properties. Here, $d(x_n,x)\in [0,\infty)$, so $\epsilon\in (0,\infty)$. The definition of convergence simply states that no matter how close you choose to be to $x$ (hence you write, for an arbitrary distance, $\epsilon >0$), you will find that all elements of the sequence from a certain index $N$ onwards are in fact within that arbitrarily small distance.
Matrix norms and spectral radius
OK. Regarding your first question, the difference I see is that the Frobenius norm is a matrix norm, while the matrix 2-norm is induced by the vector 2-norm, i.e., $\|A\|_2=\max_{\|x\|=1}\|Ax\|_2$. In fact, $\|A\|_2$ is the maximal singular value of $A$, that is, the square root of the maximal eigenvalue of $A^TA$ (this is more computable). This also answers your second question. In general, the spectral radius of a matrix is less than or equal to any matrix norm.
Let $f_n\in\mathcal C(0,1)$, $f_n\xrightarrow{\mathrm{unif}}f$ on every compact $K\subseteq(0,1)$. Is $f$ uniformly continuous on $(0,1)$?
No, $f$ need not be uniformly continuous on $(0,1)$. Let $f_n=\left\{\begin{array}{ll}n+1&\text{if }x<\frac1{n+1}\\\frac1x &\text{if }x\geq \frac1{n+1}\end{array}\right.$. Then $f_n\xrightarrow{\text{unif} }f$ on every compact subset of $(0,1)$ where $f(x)=\frac1x$. But $f$ is not uniformly continuous on $(0,1)$.
$3^n-1$ is divisible by $4 \implies n $ is even
We have that $$3^n-1\equiv (-1)^n-1\pmod 4,$$ and thus $$4\mid 3^n-1\iff n=2k.$$
Generic point and pull back
Recall that finite morphisms are closed, so that closed points of $X$ map to closed points of $Y$. On the other hand the generic point of $X$ (the only non-closed point) cannot be sent to a closed point $y\in Y$, else all points of $X$ would be sent to $y$ too, by continuity of $f$, and $f$ would be constant and thus certainly not finite. All in all we have proved that $$f^{-1}(\{\eta_Y\})=\{\eta_X\}$$ Edit: Since you are interested in a generalization, here is one: Given a morphism of completely general integral schemes $f:X\to Y$ we have the equivalence $$ f \text{ is dominant (i.e. has dense image)} \iff f(\eta_X)=\eta_Y$$ If moreover $f$ is proper (for example projective or finite) its image is closed and we thus deduce: $$ f (X)=Y\iff f(\eta_X)=\eta_Y $$ Be careful that this last equivalence does not say that $f^{-1}(\{\eta_Y\})=\{\eta_X\}$, as shown by the unique $k$-morphism $$f:X=\mathbb P^1_k\to Y=\operatorname {Spec}(k)$$ for which $f^{-1}(\{\eta_Y\})=\mathbb P^1_k$
How to find $R$ if the maximum value of $x-y+z$ under the restriction $x^2+y^2+z^2=R^2$ is $\sqrt{27}$?
You could solve all 3 equations symmetrically as $$c=\frac{1}{2x}=-\frac{1}{2y}=\frac{1}{2z}. $$ This gives $$x=-y=z, $$ and the equation of the sphere gives $$3x^2=R^2 \implies x= \pm R/\sqrt3 .$$ The two critical points are then $$ \pm R/\sqrt{3}(1,-1,1).$$ I'll leave it to you to find which one is the maximum.
Related rates of change, the thickness of a cylinder related to the radius
The volume of the oil slick is constant as it spreads on the water, since nothing extra is being added. Now $V= \pi r^2h$, assuming the slick is cylindrical. Differentiating with respect to time, $$\frac{dV}{dt}=\pi \left( 2rh\frac{dr}{dt}+ r^2\frac{dh}{dt}\right).$$ Since the volume is not changing, $dV/dt=0$, and hence the expression reduces to $$2h\frac{dr}{dt}=-r\frac{dh}{dt}.$$ Here $-dh/dt$ represents the rate at which the height (thickness) of the slick is decreasing. The question provides the rate of increase of the radius and the thickness of the slick when the radius is $150\,\text{m}$; plug them in to get the answer: $$-\frac{dh}{dt}=\frac{2h}{r}\frac{dr}{dt}=\frac{2(0.01\,\text{m})(0.2\,\text{m/min})}{150\,\text{m}}\approx 2.67\times10^{-5}\,\text{m/min}.$$
Half exact functor which is neither right exact nor left exact
A common source of half-exact functors which are neither right-exact nor left-exact is derived functors, which are always half exact by the long exact sequences relating them but are rarely right-exact or left-exact. For instance, if $n>0$ and $A$ is any object of projective dimension $>n$, then the functor $\operatorname{Ext}^n(A,-)$ is half-exact but neither right-exact nor left-exact. Here's a particularly simple example. Let $\mathcal{C}$ be the category of chain complexes of vector spaces (over your favorite field $k$), let $\mathcal{D}$ be the category of graded vector spaces, and let $F:\mathcal{C}\to\mathcal{D}$ take a chain complex to its homology. Then $F$ is half-exact by the long exact sequence in homology associated to a short exact sequence of chain complexes. But $F$ is neither right-exact nor left exact, precisely because the connecting homomorphisms in those long exact sequences can be nontrivial. Explicitly, if $A$ is the chain complex $0\to 0\to k \to 0$, $B$ is the chain complex $0\to k\stackrel{1}\to k\to 0$, and $C$ is the chain complex $0\to k\to 0\to 0$, then the obvious short exact sequence $0\to A\to B\to C\to 0$ is taken by $F$ to a sequence which is exact only in the middle. For a simple example of an additive functor which is not half-exact, consider the functor $F:Ab\to Ab$ which sends an abelian group to its subgroup of elements divisible by $2$. To see that it is not half-exact, consider what it does to the short exact sequence $0\to 2\mathbb{Z}\to\mathbb{Z}\to\mathbb{Z}/2\to 0$.
$\frac{1}{a_n}\int_0^{a_n} f(x) \,dx \rightarrow f(0)$ if $a_n\rightarrow 0$
Since $f(a_n) \rightarrow f(0)$ for any sequence tending to $0$ (by continuity of $f$ at $0$), also $$M_n= \sup\{f(x)\mid 0<x<a_n\}\rightarrow f(0).$$ Similarly for $m_n$, the infimum. Then $$m_n\int_0^{a_n} dx = m_n a_n \le \int_0^{a_n} f(x)\, dx\le M_n\int_0^{a_n} dx = M_n a_n.$$ Now divide by $a_n$ and let $n\rightarrow \infty$.
Show equality of polygonal commodity
I couldn't find the duplicate post, so here is my own solution: Let $z$ be the complex number representing $P$ in the complex plane, and $A_1, A_2, \ldots, A_n$ the vertices of the polygon, located at the $n$th roots of $r^n$, representing the geometric situation of the problem; write $w = \operatorname{cis}\left(\frac{2\pi}{n}\right)$, so $A_k$ corresponds to $r\,w^{k-1}$. With this, $A_1 = r \in \mathbb{R}$ and $OP = z \in \mathbb{R}$. Calculating the product $(PA_1) \cdot(PA_2) \cdots(PA_n)$: $$(PA_1) \cdot(PA_2) \cdot (PA_3) \cdots(PA_n)=|r-z| \cdot \left|r w-z\right| \cdot \left|r w^2-z\right|\cdots\left|r w^{n-1}-z\right| =|(z-r)\cdot (z-r w)\cdot (z-r w^2)\cdots (z-r w^{n-1})|.$$ Consider the polynomial $P(Z) = Z^n-r^n$, whose roots are $$Z^n-r^n=0 \iff Z^n=r^n \iff Z= r \operatorname{cis}\left(\frac{2k \pi}{n}\right), \quad k=0,1,\ldots,n-1.$$ Factoring, $$Z^n-r^n=(Z-r) \cdot (Z-r w) \cdot (Z- r w^2) \cdots (Z- r w^{n-1}).$$ With that, remembering that $z, r \in \mathbb{R}$ (so $z^n, r^n \in \mathbb{R}$) and that $OP = z > r$: $$(PA_1) \cdot(PA_2) \cdot (PA_3) \cdots(PA_n)=|z^n-r^n|=OP^n-r^n.$$
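A quick numeric check (my own sketch) of the product formula above:

```python
import numpy as np

n, r, z = 7, 2.0, 3.5                        # P at distance z > r on the real axis
vertices = r * np.exp(2j*np.pi*np.arange(n)/n)
product = np.prod(np.abs(z - vertices))
print(product, z**n - r**n)                  # the two values agree
```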
Why does cross product tell us about clockwise or anti-clockwise rotation?
If you use the cross product of $\vec{AB}\times \vec{AC}$ or of $\vec{AC}\times \vec{AB}$, the sign will be opposite due to the definition of the cross product. Thus you can determine in what direction you must turn around $A$ to reach $C$ from $B$ by looking at the sign of the cross product. In terms of angles, if $\vec{AB}$ and $\vec{AC}$ are in the $xy$ plane: $$\vec{AB}\times \vec{AC} = (|AB||AC|\sin\theta) \hat{z}$$ $$\vec{AC}\times \vec{AB} = (|AB||AC|\sin(-\theta))\hat{z} = -(|AB||AC|\sin(\theta))\hat{z}$$ Thus the angle becomes negative when you switch the direction - it's a bit like saying that to get from 12 o'clock to 3 o'clock you need to go $90^\circ$, but to go from 3 o'clock to 12 o'clock you need $270^\circ = 270^\circ - 360^\circ = -90^\circ$.
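A small sketch of the orientation test: the sign of the $z$-component of $\vec{AB}\times\vec{AC}$ tells the turn direction.

```python
def turn(A, B, C):
    # z-component of AB x AC for points in the plane
    cross_z = (B[0]-A[0])*(C[1]-A[1]) - (B[1]-A[1])*(C[0]-A[0])
    if cross_z > 0:
        return "counter-clockwise"
    if cross_z < 0:
        return "clockwise"
    return "collinear"

print(turn((0, 0), (1, 0), (0, 1)))   # counter-clockwise
print(turn((0, 0), (0, 1), (1, 0)))   # clockwise
```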
binomial problem : Executive Airlines
I threw together a truth table that I think answers your questions. I show all possibilities for the 5 undecided passengers. In the "over?" column I show whether or not that situation leads to turned-away passengers. There are six out of 32, for an 18.75% chance that at least one person will be turned away. In the "empty seats?" column I show true if the number of undecided passengers who show up is less than 3. That happens 16 times out of 32, for a 50% chance. In the last column I show the expected number of passengers turned away as 0.21875. I'm not sure I calculated this right, however, but this will get you most of the way there.
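A brute-force reconstruction of that table (my own sketch, assuming 5 undecided passengers, each showing up independently with probability $\tfrac12$, and 3 seats left for them) reproduces all three numbers:

```python
from itertools import product

outcomes = list(product((0, 1), repeat=5))   # show-up patterns, 32 in total
over = sum(1 for o in outcomes if sum(o) > 3)
empty = sum(1 for o in outcomes if sum(o) < 3)
turned = sum(max(sum(o) - 3, 0) for o in outcomes)

print(over / 32)     # 0.1875  -- someone is turned away
print(empty / 32)    # 0.5     -- at least one empty seat
print(turned / 32)   # 0.21875 -- expected number turned away
```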
Why are two base cases needed to prove that $n<2^n$ for all $n\geq 0\,$?
This is an interesting, and rather unfortunate, problem in a way because the answer really depends on how you interpret the question (and, of course, ambiguity is not exactly something desired in mathematics). The two possible interpretations: In the proof just given (i.e., the one in your picture), why do you need the two base cases $n=0$ and $n=1$? Or: Why do you need the two base cases $n=0$ and $n=1$ in order to prove that $n<2^n$ for all $n\geq 0$? The two current answers address point (1) above, and the comment by Git Gud addresses (2). I'll try to give reasonable responses to both interpretations. (1): You are trying to prove that $n<2^n$ holds for all $n\geq 0$, right? As SBareS notes, your induction assumption is only for values $n\geq 1$. This means that whatever you prove will only be valid for $n\geq 1$. Thus, in the proof you pictured, you need the base case $n=0$ in order for the statement you proved to be valid for all $n\geq 0$ and not just $n\geq 1$. Of course, you need the base case $n=1$ in order for your induction proof to actually be a valid induction proof. Hence, you need both base cases $n=0$ and $n=1$ in the proof you pictured. (2): You do not need both base cases to prove that $n<2^n$ for all $n\geq 0$. In fact, the proof you have pictured is pretty bad because it is not very well-written and it also makes use of an unnecessary base case to confuse matters more. I'll outline a proof below that shows how you can prove $n<2^n$ for all $n\geq 0$ using only the base case $n=0$. Proof using only one base case: For $n\geq 0$, let $S(n)$ denote the statement $$ S(n) : n<2^n. $$ Base case ($n=0$): $S(0)$ says that $0<1=2^0$, and this is true. Induction step: Fix some $k\geq 0$ and assume that $S(k)$ is true where $$ S(k) : \color{blue}{k<2^k}. $$ To be shown is that $S(k+1)$ follows where $$ S(k+1) : k+1<2^{k+1}. $$ Beginning with the left-hand side of $S(k+1)$, \begin{align} \color{blue}{k}+1 &< \color{blue}{2^k}+1\tag{by $S(k)$, the ind. hyp.}\\[0.5em] &\leq 2^k+2^k\tag{since $k\geq 0$}\\[0.5em] &=2\cdot 2^k\tag{group like terms}\\[0.5em] &=2^{k+1},\tag{by definition} \end{align} we end up at the right-hand side of $S(k+1)$, completing the inductive step. By mathematical induction, the statement $S(n)$ is true for all $n\geq 0$. $\blacksquare$ The above proof is perfectly valid, and it makes use of only the base case $n=0$. "But I don't understand why two base cases are needed in the below example." Maybe now you can see why the two base cases are needed in the specific example/proof you showed in the picture (addressed in point (1)) but that two base cases are not needed to prove that $n<2^n$ for all $n\geq 0$ (addressed in point (2) and in the proof above).
Show that $f$ is increasing without the first derivative test.
$\exp(x)= \sum_{i=0}^\infty x^i/i!$, so your expression evaluates to $\sum_{i=0}^\infty x^i/(i+2)!$; each term of the summation is increasing for $x\ge 0$, so the sum is increasing there.
Is every convergent sequence Cauchy?
Well, the definition of Cauchy sequence can be given in any metric space $(X,d)$ as Wikipedia points out, while the notion of converging sequence requires only a topology on a set to be well-defined (see here). So, if you can understand the sketch of proof given by Wikipedia and write it down rigorously, you'll see that it works for every metric space: you just have to substitute the absolute value of the difference of two real numbers, which is the standard metric on $\mathbb{R}$, with the given distance for an arbitrary metric space. $L^{p}$ does not make any difference, since it is a metric space with the distance induced by the $\vert\vert\cdot\vert\vert_{p}$ norm.
Question on the definition of global points in category.
I'm going to assume your concern is this: you are worried that while every element of $A$ gives rise to an arrow $a : 1 = \{*\} \to A$, it also gives rise to an arrow $a' : 1' = \{\star\} \to A$. In other words, we have an arrow into $A$ for every pair of an element of $A$ and a terminal object. These actually form a proper class (unless $A = \emptyset$). Basically, what's happening is that we choose one particular object to be the terminal object, which we typically notate as $1$. Then the global elements of $A$ are the elements of $\Gamma(A) \equiv \mathbf{Set}(1,A)$. That terminal objects are isomorphic means they are interchangeable with respect to all categorical properties. In other words, there's no way to tell which choice we chose. That they are further isomorphic by a unique isomorphism means that if we do want to make a different choice, everything transforms in a canonical way. In other words, we have no choice in how things change when we make a different choice for the terminal object. So "global elements" doesn't mean arrows from all terminal objects, just arrows from one, and which one it is doesn't matter.
Find Two Closest Solutions
Changing $e^z = t$ we have $e^t = 1$ so $t=2\pi i k$ with $k$ integer, i.e. $e^z = 2\pi ik$ with $k$ integer. If $k=0$, $e^z=0$ has no solution, so if $k\neq 0$ $$ z = \ln|2\pi ik| + \arg(2\pi i k)i = \ln(2\pi |k|) + \arg(2\pi i k)i $$ Simplifying $$ z = \begin{cases} \ln(2\pi k) + \pi i/2 & k > 0\\ \ln(-2\pi k) - \pi i/2 & k < 0 \end{cases} $$
What does it mean for a Coxeter system to be of "spherical" type?
Let $\Gamma$ be a Coxeter graph and let $(W,S)$ be the Coxeter system of $\Gamma$. We say $\Gamma$ is $\textit{of spherical type}$ if $W_{\Gamma}$ is finite. Note that if $\Gamma_1,\ldots, \Gamma_{\ell}$ are the connected components of $\Gamma$, then $W=W_{\Gamma_1}\times \cdots \times W_{\Gamma_{\ell}}$. In this case, $\Gamma$ is of spherical type if and only if each of its connected components is of spherical type. Some properties are: A Coxeter graph $\Gamma$ is of spherical type if and only if the symmetric bilinear form $b:V\times V\rightarrow \mathbb{R}$ is positive definite, where $V$ is a representation space of $W$. The connected spherical-type Coxeter graphs are precisely those in the standard classification of finite Coxeter groups: $A_n$, $B_n$, $D_n$, $E_6$, $E_7$, $E_8$, $F_4$, $H_3$, $H_4$, and $I_2(p)$; for the diagrams, see Mike Davis' slides on "Examples of Groups: Coxeter Groups".
Riemann rearrangement theorem
A complex series converges if and only if the real and imaginary parts converge, and an identical statement holds when taking absolute values. Then if $\sum_{n=1}^{\infty} c_{n}$ is convergent, so are $\sum_{n=1}^{\infty} a_{n}$ and $\sum_{n=1}^{\infty} b_{n}$, where $c_{n} = a_{n} + ib_{n}$. If $\sum_{n=1}^{\infty} |c_{n}|$ diverges, at least one of $\sum_{n=1}^{\infty} |a_{n}|$ or $\sum_{n=1}^{\infty} |b_{n}|$ diverges; suppose without loss of generality that exactly one of them diverges, and that it is the former, $\sum_{n=1}^{\infty} |a_{n}|$. Then we can force the real part of our series, $\sum_{n=1}^{\infty} a_{n}$, to converge to whatever we want by the original Riemann Rearrangement Theorem. Since $\sum_{n=1}^{\infty} |b_{n}|$ converges, we are stuck with a fixed sum for the imaginary part, but we can still hit the entire horizontal line through $i\sum_{n=1}^{\infty} b_{n}$. If both the real and imaginary parts are conditionally convergent, the whole situation becomes more complicated...
Proving a result for cohomology for real projective plane
Note that the projection $\phi:\Bbb Z/4\to \Bbb Z/2$ induces a homomorphism on the cohomology rings. Consider the map $$g:H^*(\Bbb RP^2;\Bbb Z/4)\to H^*(\Bbb RP^2;\Bbb Z/2)$$ given by composing a cohomology class with the coefficient map. By the UCT, the map in dimension $1$, i.e., $f:H^1(\Bbb RP^2;\Bbb Z/4)\to H^1(\Bbb RP^2;\Bbb Z/2)$, is equivalent to $\operatorname{Hom}(H_1(\Bbb RP^2;\Bbb Z),\Bbb Z/4)\to \operatorname{Hom}(H_1(\Bbb RP^2;\Bbb Z),\Bbb Z/2)$. Note that for any $\psi\in\operatorname{Hom}(H_1(\Bbb RP^2;\Bbb Z),\Bbb Z/4)$, $\psi:\Bbb Z/2\to\Bbb Z/4$ has two possibilities: $1\mapsto 2$ or $1\mapsto 0$. Then $\phi\circ\psi:H_1(\Bbb RP^2;\Bbb Z)\to \Bbb Z/2$ is trivial because both possible images of $\psi$ lie in $\ker(\phi)$. So $f$ is the trivial homomorphism, which means $g(\alpha\smile\alpha)=g(\alpha)\smile g(\alpha)=0$ if $\alpha$ generates $H^1(\Bbb RP^2;\Bbb Z/4)$. Consider $\Bbb RP^\infty$. The embedding $\Bbb RP^{2}\hookrightarrow \Bbb RP^\infty$ induces isomorphisms $H^i(\Bbb RP^2;\Bbb Z/4)\cong H^i(\Bbb RP^\infty;\Bbb Z/4)$ for $0\le i\le 2$. So $\alpha$ also generates $H^1(\Bbb RP^\infty;\Bbb Z/4)$. Suppose $\beta$ generates $H^2(\Bbb RP^\infty;\Bbb Z/4)$; then we have $\beta\smile\beta\neq 0$ because $g$ is bijective in positive even dimensions (which means powers of $\beta$ generate the even-dimensional cohomology). Suppose $\alpha\smile\alpha=\beta\neq0$ ($H^2$ is $\Bbb Z/2$, so this is the only non-zero element); then $g(\beta^2)=g(\alpha^4)=g(\alpha)^4=0$. However, $g$ is bijective in positive even dimensions, so $g(\beta^2)\neq 0$, contradicting the preceding claim. Thus $\alpha\smile\alpha=0$, and $\Bbb RP^2$ inherits this trivial cup product.
Is induction like this logically rigorous?
Really it depends on what is happening along the edges (when $m$ or $n$ is $1$). If your induction argument requires for instance that $p(2,0)$ be equal to $\frac12$ in order for $p(2,1)$ to be equal to $\frac12$, then your argument is not valid. For instance $p(1,1) = \frac12$ and $p(m,n) = 5$ for all $(m,n)\ne (1,1)$ would be a counterexample.
Equivalence Relations of n|(x1-x2)
For equivalence relations, you have to prove that it is RST - reflexive, symmetric and transitive. In addition, it may help to consider modular arithmetic: $${n\, | \,x_1-x_2} \iff {x_1 \equiv x_2 \pmod n}$$ Now you should be able to solve this. EDIT: Proof for transitivity at request. (Note that I've used $R$ to refer to the relation - personal preference) Transitive: $$aRb \iff n\, | \,a-b \iff {a \equiv b \pmod n}$$ $$bRc \iff n\, | \,b-c \iff {b \equiv c \pmod n}$$ $\iff a \equiv b \equiv c\pmod n$ $\implies a \equiv c\pmod n$ $\iff n\, | \,a-c$ $\iff aRc$ $\implies$ Transitive.
How is this inverse function calculated? (Laplace distribution)
$\DeclareMathOperator{sgn}{sgn}$ The function to be inverted is $$ \begin{align} F(x) &amp;= \int_{-\infty}^x \!\!f(u)\,\mathrm{d}u = \begin{cases} \frac12 \exp \left( \frac{x-\mu}{b} \right) &amp; \mbox{if }x &lt; \mu \\ 1-\frac12 \exp \left( -\frac{x-\mu}{b} \right) &amp; \mbox{if }x \geq \mu \end{cases} \\ &amp;= \tfrac{1}{2} + \tfrac{1}{2} \sgn(x-\mu) \left( 1-\exp \left( -\frac{|x-\mu|}{b} \right) \right) \\ &amp;= y \end{align} $$ with $b &gt; 0$. We start using the case distinctions. For $x \ge \mu$: $$ y = 1-\frac12 \exp \left( -\frac{x-\mu}{b} \right) \iff \\ \exp \left( -\frac{x-\mu}{b} \right) = 2(1-y) \iff \\ -\frac{x-\mu}{b} = \ln(2(1-y)) \iff \\ x = \mu - b \ln(2-2y) $$ For $x &lt; \mu$: $$ y = \frac12 \exp \left( \frac{x-\mu}{b} \right) \iff \\ \exp \left( \frac{x-\mu}{b} \right) = 2 y \iff \\ \frac{x-\mu}{b} = \ln(2y) \iff \\ x = \mu + b \ln(2y) $$ We can combine into $$ x = \mu - \sgn(x - \mu) b \ln(1 + \sgn(x - \mu) - \sgn(x - \mu) 2 y) $$ Comparing with Wikipedia: $$ F^{-1}(p) = \mu - b\,\sgn(p-0.5)\,\ln(1 - 2|p-0.5|) $$ The terms are equal, if $\sgn(x - \mu) = \sgn(y-0.5)$. For $x \ge \mu$ we had $$ 0 \le -b \ln(2-2y) \iff \\ 0 \ge \ln(2 - 2y) \iff \\ 2 - 2 y \le 1 \iff \\ 1 - y \le 1/2 \iff \\ -y \le -1/2 \iff \\ y \ge 1/2 $$ For $x &lt; \mu$ we had $$ 0 &gt; x - \mu = b \ln(2y) \iff \\ 0 &gt; \ln(2y) \iff \\ 2y &lt; 1 \iff \\ y &lt; 1/2 $$ So indeed $\sgn(x-\mu) = \sgn(y - 1/2)$.
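A numeric round-trip check (my own sketch) of $F$ and the inverse derived above:

```python
import numpy as np

mu, b = 1.0, 2.0

def F(x):
    return 0.5 + 0.5*np.sign(x - mu)*(1 - np.exp(-abs(x - mu)/b))

def F_inv(p):
    return mu - b*np.sign(p - 0.5)*np.log(1 - 2*abs(p - 0.5))

for p in (0.1, 0.4, 0.5, 0.9):
    print(p, F(F_inv(p)))   # each line recovers p
```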
$A\leq B \leq C \implies B\leq A^{1-\alpha} C^{\alpha}$ for $\alpha \in (0,1)$?
Try $A=1$, $B=2$, $C=3$ and $\alpha=\frac{1}{2}$
find the volume of the solid of intersection of the two spheres of radii $a$ and $b$ (with $b < a$)
Your approach is fine. You are basically taking $x$ as your $z$ axis and hence in cylindrical coordinates $y^2 + z^2 = r^2$ and so you may want to write them as, $r^2+x^2 = a^2$ and $r^2 + (x-a)^2 = b^2$. Your first integral is fine but your second integral should be $\displaystyle \int_{a-b}^{\frac{2a^2 - b^2}{2a}} \pi \, \big(b^2 - (a-x)^2\big) \,dx = \frac{\pi b^6}{24a^3} - \frac{\pi b^4}{2a} + \frac{2 \pi b^3}{3}$ The lower bound is the minimum value of $x$ for the smaller sphere, which is $(a-b)$ and it is part of the intersection volume.
Can a quadrilateral whose sides (in some order) are in arithmetic progression have an inscribed circle?
For a quadrilateral with sides $a,b,c,d$ (in that order) to have an inscribed circle, a necessary${}^{\color{blue}{[1]}}$ condition is $a+c=b+d$. If $a,b,c,d$ form a non-trivial A.P, then it is impossible for that quadrilateral to have an inscribed circle. It turns out the condition is also sufficient${}^{\color{blue}{[2]}}$. Given four numbers $a, b, c, d$. If $a+c = b +d$, there are quadrilaterals with sides in that order and having an inscribed circle. In particular, this means given any AP $a, a+x, a+2x, a+3x$; there is a quadrilateral with sides $a, a+x, a+3x, a+2x$ (in that order)${}^{\color{blue}{[3]}}$. As an example, consider the right trapezoid $ABCD$ with vertices at $$A (1,2), B : (-2,2), C : ( -2, -2 ), D : (4,-2)$$ Its sides are $AB = 3, BC = 4, CD = 6, DA = 5$. They form an AP $3,4,5,6$ but not in order. One can verify that $x^2+y^2 = 2^2$ is an inscribed circle of this right trapezoid. In general, a quadrilateral in which a circle can be inscribed is known as a tangential quadrilateral. Its wiki entry has other necessary and sufficient conditions for a quadrilateral to be tangential. Notes/References $\color{blue}{[1]}$ - The necessary condition is known as the Pitot theorem, first proved by Henri Pitot in 1725. $\color{blue}{[2]}$ - Pointed out by Jean Marie. The converse of Pitot's theorem was proved by Jakob Steiner in $1846$. $\color{blue}{[3]}$ - Pointed out by Aretino in comment.
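A quick numeric check (my own sketch) of the example above: each side of the trapezoid should lie at distance $2$ from the origin, the radius of $x^2+y^2=2^2$.

```python
import numpy as np

pts = {'A': (1, 2), 'B': (-2, 2), 'C': (-2, -2), 'D': (4, -2)}
for P, Q in [('A', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'A')]:
    p = np.array(pts[P], dtype=float)
    d = np.array(pts[Q], dtype=float) - p
    # distance from the origin to the line through p with direction d
    dist = abs(p[0]*d[1] - p[1]*d[0]) / np.linalg.norm(d)
    print(P + Q, np.linalg.norm(d), dist)   # side lengths 3,4,6,5; distance 2
```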
Solving $\cos^2{\theta}-\sin{\theta} = 1$
Notice, we have $$\cos^2\theta-\sin\theta=1$$ $$1-\sin^2\theta-\sin\theta=1$$ $$-\sin^2\theta-\sin\theta=0$$ $$\sin^2\theta+\sin\theta=0$$ $$\sin\theta(\sin\theta+1)=0$$ $$\sin\theta=0\iff \theta=n(180^\circ),$$ where $n$ is any integer. For the given interval $[0^\circ, 360^\circ]$, substituting $n=0, 1, 2$, we get $$ \theta=0^\circ, 180^\circ, 360^\circ.$$ Now, $$\sin\theta+1=0$$ $$\sin\theta=-1\iff \theta=2n(180^\circ)-90^\circ.$$ For the given interval $[0^\circ, 360^\circ]$, substituting $n=1$, we get $$\theta= 270^\circ.$$ Hence, we have $$\color{red}{\theta}=\left\{\color{blue}{0^\circ, 180^\circ, 270^\circ, 360^\circ} \right\}$$
Let G be a group of order 2n with n being an odd integer. Prove that G has a normal subgroup of order n.
Hint. Let $P$ be a Sylow $2$-subgroup, and consider $N_G(P)$. What is $\operatorname{Aut}(P)$ like? What does that mean about $N_G(P)/C_G(P)$? Use this to show that a $2$-complement is normal.
Please show me the question related to Continuously differentiable function & smallest value & minimizer
Part a: The simplest solution is to say that because $K$ is compact (and connected), its continuous image must be compact (and connected), which means that $f(K)$ must be (a) closed and bounded (interval), which means that $f$ attains both a minimum and maximum over $K$. Part b: Let $\mathbf p$ be any point on the edge of $K$, that is, any point whose norm is $1$. Note that $\langle Df(\mathbf p),-\mathbf p\rangle$, which gives the directional derivative at $\mathbf p$ in the direction of the origin, is negative. This means that $f$ is decreasing along the vector from $\mathbf p$ to the origin. By the continuity of $Df$ and the mean value theorem, we may show that there exists some $\delta > 0$ so that $f((1-\delta)\mathbf p)<f(\mathbf p)$. Therefore, no point $\mathbf p$ on the border of $K$ can be a minimizer of $f$.
Bounding rectangle of ellipse
D. Thomine’s comment beat me to the punch. The parameterization $C+(a\cos t,b\sin t)$ of an ellipse can be understood as the image of the unit circle $(\cos t,\sin t)$ under scaling and translation, and the key to understanding this part of the algorithm is to map everything back to the unit circle. So, let $C$ be the as-yet-unknown coordinates of the ellipse center, and let $P_1' = (x_1',y_1') = (x_1/a,y_1/b)-C$ and similarly for $P_2'$. (The unknown $C$’s will cancel and drop out of the calculations pretty quickly.) We then have $(-r_1,r_2)=\frac12(P_2'-P_1')$, i.e., half of the segment $P_1'P_2'$, and $\sqrt{r_1^2+r_2^2}$ is the length of this half-segment. Rotating this vector 90° produces $(r_2,r_1)$, which is the direction of a line through the center of the circle and the midpoint of $P_1'P_2'$. The interpretation of the two angles $\kappa$ and $\lambda$ becomes pretty straightforward with that: $\chi=\kappa\pm\lambda$ are the angles to $P_1'$ and $P_2'$ (with some ambiguity as to quadrant), which are then the correct inputs to $C+(a\cos\chi,b\sin\chi)$ to obtain $P_1$ and $P_2$. It’s also possible to solve for the center of the ellipse more directly: it must lie on one of the intersections of the congruent ellipses centered at $P_1$ and $P_2$. If you subtract the equation of one of these ellipses from the other, you get the equation of a line that passes through the two intersection points. The problem then reduces to the intersection of a line and an ellipse, which can be solved quite readily. The resulting expressions are much more complex than the ones in the algorithm in your code, though.
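To make the mapping concrete, here is a minimal NumPy sketch. The center $C$ and semi-axes $a, b$ below are hypothetical placeholder values (in the real algorithm they come from earlier steps); the point is only to show that scaling back to the unit circle recovers the parameter $\chi$ with $C+(a\cos\chi, b\sin\chi)$ reproducing the point:

```python
import numpy as np

C = np.array([2.0, 1.0])   # ellipse center (hypothetical example value)
a, b = 3.0, 1.5            # semi-axes (hypothetical example values)

def angle_on_ellipse(P):
    """Map P back to the unit circle and return chi such that
    C + (a*cos(chi), b*sin(chi)) == P."""
    Pp = np.array([(P[0] - C[0]) / a, (P[1] - C[1]) / b])  # P' on the unit circle
    return np.arctan2(Pp[1], Pp[0])

# A point known to lie on the ellipse, built with parameter 0.7:
P1 = C + np.array([a * np.cos(0.7), b * np.sin(0.7)])
chi = angle_on_ellipse(P1)
print(np.isclose(chi, 0.7))  # True: the parameterization is recovered
```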
Why are the solutions for $\frac{4}{x(4-x)} \ge 1\;$ and $\;4 \ge x(4-x)\;$ different?
Other answers (and comments) have already explained the potential pitfalls of multiplying both sides of an inequality by a quantity whose sign is sometimes negative. Here is an alternative approach that might help: $$\begin{align} {4\over x(4-x)}\ge1 &\iff{4\over x(4-x)}-1\ge0\\ &\iff{4-x(4-x)\over x(4-x)}\ge0\\ &\iff{4-4x+x^2\over x(4-x)}\ge0\\ &\iff{(2-x)^2\over x(4-x)}\ge0\\ &\iff x(4-x)\gt0\quad\text{(since }(2-x)^2\text{ is always non-negative)}\\ &\iff0\lt x\lt4 \end{align}$$ Remark: In general the quadratic in the numerator will not be a perfect square, in which case the final steps are a bit more complicated (you can wind up with more than one interval where the inequality holds). But the problem here was concocted to have a simple answer.
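If you want to check the interval mechanically, SymPy can solve the rational inequality directly, without ever multiplying through by $x(4-x)$; a sketch:

```python
from sympy import symbols, solve_univariate_inequality

x = symbols('x', real=True)
# Solve 4/(x*(4-x)) >= 1 as a rational inequality.
sol = solve_univariate_inequality(4/(x*(4 - x)) >= 1, x)
print(sol)  # expected: (0 < x) & (x < 4)
```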
Self adjointness of square root operator
It goes like this: If $A$ is a non-negative self-adjoint matrix, then by the Spectral Theorem, we have $$ A = UDU^{*}$$ where $U$ is a unitary matrix and $D$ is the diagonal matrix with the eigenvalues of $A$ as the diagonal entries. Since $A$ is non-negative, these eigenvalues are also non-negative. Now let $ E = \sqrt{D}$ be the diagonal matrix with each diagonal entry being the square root of the corresponding diagonal entry in $D$ (possible as the diagonal entries of $D$ are non-negative). Now clearly, $$ (UEU^{*})^{2} = UEU^{*}UEU^{*} = UE^{2}U^{*} = A, $$ whence $\sqrt{A} = UEU^{*}$, which by the spectral theorem again is self-adjoint and non-negative. Please note that a non-negative matrix is a self-adjoint matrix such that $\langle x,Ax \rangle \geq 0$ for all $x \in \mathbb{R}^{n}$. So a non-negative matrix is always self-adjoint, by definition.
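The proof is constructive, and the same recipe is how one computes matrix square roots numerically. A minimal NumPy sketch for a real symmetric positive semidefinite $A$:

```python
import numpy as np

def sqrtm_psd(A):
    """Square root of a symmetric positive semidefinite matrix via
    the spectral theorem: A = U D U*  =>  sqrt(A) = U sqrt(D) U*."""
    eigvals, U = np.linalg.eigh(A)          # eigh: for symmetric/Hermitian matrices
    eigvals = np.clip(eigvals, 0.0, None)   # guard against tiny negative round-off
    return U @ np.diag(np.sqrt(eigvals)) @ U.T

A = np.array([[2.0, 1.0], [1.0, 2.0]])      # symmetric, eigenvalues 1 and 3 >= 0
R = sqrtm_psd(A)
print(np.allclose(R @ R, A))  # True: R^2 = A
print(np.allclose(R, R.T))    # True: R is self-adjoint
```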
Exponential Distribution Unbiased Estimate of Coefficient of Variation?
Check out "On New Moment Estimation of Parameters of the Gamma Distribution using its Characterization," Ann. Inst. Statist. Math., 2002, by Tea-Yuan Hwang and Ping-Huang Huang, available online. From their work, I think you should try this estimator, where $T$ is the CV, $n$ is the sample size, $S_n$ is the sample standard deviation, and $\bar x$ is the sample mean: $$\hat {T}=\sqrt{\left( {{n+1} \over {n}} \right) \left( {S_n^2 \over \bar x^2} \right) }$$ $\hat T^2$ should be unbiased for the square of the CV. Not quite what you want, but an improvement over what you currently are using. They mention some exact results in another reference, but only for sample sizes of 3, 4, or 5.
Francis Galton's surname problem
For the first question, if the name is extinct after one generation, it is not available for the second, so the chance a name is gone after two generations is at least as great as the chance it is gone after one. For your second question, very few males have more than five or six male children.
The diameter of a specific 3-regular graph
Start from any vertex $v$. Construct sets $D_1$, $D_2$, ... in which $D_{k}$ contains all the vertices in $G$ that are at distance $k$ from $v$. If the diameter is at most $m$ then $D_{k}=\emptyset$ whenever $k\geq m+1$, because there are no vertices at distance $k$ (note that this would be true for any $v$). Basically, all we need to do is show that $D_{m+1}$ is not the empty set. Since $G$ is $3$-regular, you know that $D_1$ contains exactly 3 vertices. With the benefit of hindsight we can say $|D_1|=3\cdot 2^{0}$. Each vertex in $D_1$ has 2 neighbors which are not vertex $v$. These neighbors are either other vertices in $D_1$, or vertices in $D_2$. Thus, there are at most $2\cdot |D_1|=3\cdot 2^{1}$ vertices in $D_2$, i.e. $|D_2|\leq 3\cdot 2^{1}$. Similarly, each vertex in $D_2$ has 3 neighbors. One of those neighbors must be in $D_1$. At most 2 of those neighbors are in $D_{3}$. Thus we have that $|D_3|\leq 2|D_2|\leq 3\cdot 2^{2}$. Inductively, each vertex in $D_{k}$ has at least one neighbor in $D_{k-1}$ and hence at most 2 neighbors in $D_{k+1}$. Thus $|D_{k+1}|\leq 2|D_{k}|\leq 3\cdot 2^{k}$. Now, what is the most vertices $G$ can have if $|D_{m+1}|=0$? \begin{align*} |G| &= 1 + |D_1|+|D_2| + ... + |D_{m}| \\ &\leq 1 + 3 + 3\cdot 2^1 + ... + 3\cdot 2^{m-1} \\ &=1 + 3\left(\sum_{k=0}^{m-1}2^{k}\right) \\ &=1 + 3(2^{m}-1) \\ &=3\cdot 2^{m} - 2. \end{align*} But your graph has more vertices than that. So $|D_{m+1}|>0$, and hence the diameter of the graph is at least $m+1$.
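The counting argument turns directly into a tiny lower-bound routine; a Python sketch:

```python
def diameter_lower_bound(n):
    """Least m with 3*2**m - 2 >= n. Any 3-regular graph on n vertices
    has diameter at least this m, since diameter <= m-1 would force
    n <= 3*2**(m-1) - 2 < n."""
    m = 0
    while 3 * 2**m - 2 < n:
        m += 1
    return m

# Example: a 3-regular graph on 100 vertices has diameter at least 6,
# since 3*2**5 - 2 = 94 < 100 <= 3*2**6 - 2 = 190.
print(diameter_lower_bound(100))  # 6
```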
How to find x intercept?
Solve $x^3 = -1$. Put your equation into that form.
Confusion related to curse of dimensionality in k nearest neighbor
For the $k$ nearest neighbor rule to perform well, we want the neighbours to be representative of the population densities at the query point (the given value $x$ to be classified). Which is to say that the $k$ nearest neighbours should typically fall near that point $x$. We can expect that to happen in low dimensions: in a unit interval, for example, with 5000 points, the 5 nearest neighbours would lie in a neighborhood of length $0.001$ on average, which seems right; the 5000 points cover the space decently, even when taken in groups of 5, so we can expect that the 5 neighbours will be quite near $x$. But, say, in 6 dimensions, we cannot be so optimistic: 5000 points in the unit cube means that we have on average 5 points in each sub-cube of side length $\approx 0.32$, so the 5 nearest neighbours of a given query point will not, on average, be very near to it.
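A small NumPy simulation sketch makes the effect concrete: the distance to the 5th nearest neighbour of a central query point grows sharply with dimension, even with 5000 uniform points:

```python
import numpy as np

rng = np.random.default_rng(1)
n_points = 5000

for dim in (1, 2, 6):
    pts = rng.uniform(size=(n_points, dim))   # uniform points in the unit cube
    query = np.full(dim, 0.5)                 # query point in the middle
    d = np.linalg.norm(pts - query, axis=1)
    k5 = np.sort(d)[4]                        # distance to the 5th nearest neighbour
    print(f"dim={dim}: 5-NN distance ~ {k5:.4f}")
# Typical output: ~0.0005 in 1D, ~0.02 in 2D, already ~0.2-0.3 in 6D.
```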
Why aren't there 21 players in this tournament?
You can't assume that the first place player has won all his games. What you can do is compute the total number of games played by $n$ players and recognize that that represents the total score. Similarly, how many points did the bottom ten players score playing among themselves?
Can $xy=0$ be the image of an algebraic morphism $\mathbb A^2 \rightarrow \mathbb A^2$?
In your argument, it's not clear to me why $f_1f_2(s,t)$ must have finitely many zeros. Zero sets of nontrivial single variable polynomials are finite, but those of multivariate polynomials need not be. However, it's still true that this is not a possible image for $\mathbb A^2$, because an irreducible algebraic set cannot surject onto a reducible one. Intuitively, this is similar to the fact that the image of a connected topological space is connected. With this in mind, let's solve your problem by proving the more general statement (assuming $k$ to be algebraically closed throughout). Let $X=Z(I)$ for $I\subset k[x_1,\dots,x_n]$ and $Y=Z(J)$ for $J \subset k[y_1,\dots,y_m].$ Assume that $I$ and $J$ are radical. Recall that algebraic morphisms $f:X \rightarrow Y$ are in one-to-one correspondence with $k$-algebra homomorphisms $f^{\#}:k[y_1,\dots,y_m]/J \rightarrow k[x_1,\dots,x_n]/I$. Recall also that if $f$ is surjective then $\ker f^{\#}=\{0\}$. Now suppose that $X$ surjects onto $Y$, and that $I$ is prime and $J$ is not prime. $(*)$ The above discussion implies that $k[y_1,\dots, y_m]/J$ embeds into $k[x_1,\dots,x_n]/I$. Since $J$ is not prime, $k[y_1,\dots,y_m]/J$ is not a domain. Since $I$ is prime, $k[x_1,\dots,x_n]/I$ is a domain. Hence, this is a contradiction. For your problem, simply check that $I=(0) \subset k[s,t]$ and $J=(xy) \subset k[x,y]$ satisfy $(*)$, so there is no surjection $\mathbb A^2 \rightarrow Z(xy)$. Edit: here's a counterexample for $k$ not algebraically closed. Take $k=\mathbb Z_2$, and consider $f:k^2 \rightarrow k^2$ given by $(x,y) \mapsto (xy,x+1)$. Then $Z(xy)=\{(0,0),(0,1),(1,0)\}$, and the image of $f$ is precisely $Z(xy)$: $$(0,0) \mapsto (0,1),$$ $$(0,1) \mapsto(0,1),$$ $$(1,0) \mapsto (0,0),$$ $$(1,1) \mapsto (1,0).$$
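The $\mathbb Z_2$ counterexample is small enough to verify by brute force; a Python sketch:

```python
# Verify that (x, y) -> (x*y, x+1) over F_2 has image exactly Z(xy).
F2 = (0, 1)

image = {((x * y) % 2, (x + 1) % 2) for x in F2 for y in F2}
Z_xy = {(x, y) for x in F2 for y in F2 if (x * y) % 2 == 0}

print(image == Z_xy)  # True
print(sorted(image))  # [(0, 0), (0, 1), (1, 0)]
```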
Predicate Logic proof: $\forall x\in S\exists y\in S p(x,y)\to \exists y\in Sp(y,y)$
Let $S:=\Bbb Z$ and, for all $(x,y)\in S^2$, let $p(x,y)$ mean $x>y$. Then $\forall x\in S\,\exists y\in S\, p(x,y)$ holds, but $\exists y\in S\, p(y,y)$ fails, so the implication is not valid.
Inequality with five variables
Here is a full proof. Let us start the discussion for general $n$. Denote $S = \sum_{i=1}^n a_i$. Since by AM-GM, $S \geq n \sqrt[n]{a_1a_2...a_n}$, we have $$1+\frac{n(n-2)\sqrt[n]{a_1a_2...a_n}}{2S} \geq \frac{S}{S - (n-2)\sqrt[n]{a_1a_2...a_n}}$$ Hence a tighter claim is (simultaneously defining $L$ and $R$): $$L = \sum_{cyc}\frac{a_i}{a_i+a_{i+1}}\geq 1+\frac{n(n-2)\sqrt[n]{a_1a_2...a_n}}{2S} = R$$ and it suffices to show that one. We write $2 L \geq 2 R$ or $L \geq 2 R- L$ and add on both sides a term $$\sum_{cyc}\frac{a_{i+1}}{a_i+a_{i+1}}$$ which leaves us to show $$n = \sum_{cyc}\frac{a_i + a_{i+1}}{a_i+a_{i+1}}\geq 2+\frac{n(n-2)\sqrt[n]{a_1a_2...a_n}}{S} + \sum_{cyc}\frac{-a_i + a_{i+1}}{a_i+a_{i+1}}$$ or, in our final equivalent reformulation of the L-R claim above, $$ \sum_{cyc}\frac{-a_i + a_{i+1}}{a_i+a_{i+1}} \leq (n - 2) (1- \frac{n \sqrt[n]{a_1a_2...a_n}}{S} )$$ For general odd $n$ see the remark at the bottom. Here the task is to show $n=5$. Before doing so, we will first prove the following Lemma (required below), which is the above L-R-inequality for 3 variables (which is tighter than the original formulation, hence we cannot apply the proof for $n=3$ given above by Michael Rozenberg for the original formulation): $$ \frac{b-a}{b+a} + \frac{c-b}{c+b} + \frac{a-c}{a+c} \leq (1- \frac{3 \sqrt[3]{a\, b \, c}}{a + b+ c} )$$ This Lemma is, from the above discussion, just a re-formulation of the claim in $L$ and $R$ above, for 3 variables, i.e. $$ \frac{a}{b+a} + \frac{b}{c+b} + \frac{c}{a+c} \geq 1+\frac{3\sqrt[3]{a \, b \ c}}{2(a+b+c)}$$ By homogeneity, we can demand $abc=1$ and prove, under that restriction, $$ \frac{a}{b+a} + \frac{b}{c+b} + \frac{c}{a+c} \geq 1+\frac{3}{2(a+b+c)}$$ This reformulates into $$ \frac{a\; c}{a +b} + \frac{b\; a}{b +c} + \frac{c\; b}{c +a} \geq \frac{3}{2}$$ or equivalently, due to $abc=1$, $$ \frac{1}{b(a +b)} + \frac{1}{c(b +c)} + \frac{1}{a(c +a)} \geq \frac{3}{2}$$ which is known (2008 International Zhautykov Olympiad), for some proofs see here: http://artofproblemsolving.com/community/c6h183916p1010959 Hence the Lemma holds. 
For $n=5$, we rewrite the LHS of our above final reformulation by adding and subtracting terms: $$ \frac{b-a}{b+a} + \frac{c-b}{c+b} + \frac{d-c}{d+c} + \frac{e-d}{e+d} + \frac{a-e}{a+e} = \\ (\frac{b-a}{b+a} + \frac{c-b}{c+b} + \frac{a-c}{a+c}) + (\frac{c-a}{c+a}+\frac{d-c}{d+c} + \frac{a-d}{a+d}) + (\frac{d-a}{d+a}+ \frac{e-d}{e+d} + \frac{a-e}{a+e}) $$ This also holds for any cyclic shift in (abcde), so we can write $$ 5 (\frac{b-a}{b+a} + \frac{c-b}{c+b} + \frac{d-c}{d+c} + \frac{e-d}{e+d} + \frac{a-e}{a+e}) = \\ \sum_{cyc (abcde)} (\frac{b-a}{b+a} + \frac{c-b}{c+b} + \frac{a-c}{a+c}) + \sum_{cyc (abcde)}(\frac{c-a}{c+a}+\frac{d-c}{d+c} + \frac{a-d}{a+d}) + \sum_{cyc (abcde)} (\frac{d-a}{d+a}+ \frac{e-d}{e+d} + \frac{a-e}{a+e}) $$ Using our Lemma, it suffices to show (with $S = a +b+c+d+e$) $$ \sum_{cyc (abcde)} (1- \frac{3 \sqrt[3]{a\, b \, c}}{a + b+ c} ) + \sum_{cyc (abcde)}(1- \frac{3 \sqrt[3]{a\, c \, d}}{a + c+ d} ) + \sum_{cyc (abcde)}(1- \frac{3 \sqrt[3]{a\, d \, e}}{a + d+ e} ) \leq 15 (1- \frac{5 \sqrt[5]{a b c d e }}{S} ) $$ which is $$ \sum_{cyc (abcde)} (\frac{\sqrt[3]{a\, b \, c}}{a + b+ c} + \frac{\sqrt[3]{a\, c \, d}}{a + c+ d} + \frac{\sqrt[3]{a\, d \, e}}{a + d+ e} ) \geq 25 \frac{\sqrt[5]{a b c d e }}{S} $$ Using Cauchy-Schwarz leaves us with showing $$ \frac {(\sum_{cyc (abcde)} \sqrt[6]{a\, b \, c})^2}{\sum_{cyc (abcde)}(a + b+ c)} + \frac {(\sum_{cyc (abcde)} \sqrt[6]{a\, c \, d})^2}{\sum_{cyc (abcde)}(a + c+ d)} + \frac {(\sum_{cyc (abcde)} \sqrt[6]{a\, d \, e})^2}{\sum_{cyc (abcde)}(a + d+ e)} \geq 25 \frac{\sqrt[5]{a b c d e }}{S} $$ The denominators all equal $3S$, so this becomes $$ (\sum_{cyc (abcde)} \sqrt[6]{a\, b \, c})^2 + (\sum_{cyc (abcde)} \sqrt[6]{a\, c \, d})^2 + (\sum_{cyc (abcde)} \sqrt[6]{a\, d \, e})^2 \geq 75 \sqrt[5]{a b c d e } $$ Using AM-GM gives for the first term $$ (\sum_{cyc (abcde)} \sqrt[6]{a\, b \, c})^2 \geq ( 5 (\prod_{cyc (abcde)} \sqrt[6]{a\, b \, c} )^{1/5})^2 = 25 (\prod_{cyc (abcde)} ({a\, b \, c} ) )^{1/15} = 25 \sqrt[5]{a b c d e } $$ By the same procedure, the second and the third term on the LHS are likewise greater than or equal to $25 \sqrt[5]{a b c d e }$. This concludes the proof. Remarks: the tighter $L$-$R$ claim used here is, for general $n$, asked for in the problem given at Cyclic Inequality in n (at least 4) variables. For general odd $n>5$, take the method of adding and subtracting terms to form smaller sub-sums which are cyclically closed in a smaller number of variables, and apply previous results for smaller $n$ recursively.
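Given the length of the chain of reductions, a quick numerical sanity check of the tightened $L \ge R$ claim for $n=5$ is reassuring. A NumPy sketch over random positive tuples:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

def L_minus_R(a):
    """L - R for the tightened claim: L = cyclic sum a_i/(a_i + a_{i+1}),
    R = 1 + n(n-2) * geometric_mean(a) / (2 * sum(a))."""
    S = a.sum()
    L = np.sum(a / (a + np.roll(a, -1)))
    R = 1 + n * (n - 2) * a.prod()**(1 / n) / (2 * S)
    return L - R

worst = min(L_minus_R(rng.uniform(0.01, 10.0, size=n)) for _ in range(100_000))
print(worst >= 0, worst)  # True; the minimum approaches 0 near a_1 = ... = a_5
```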
How do you solve this equations where the unknown is to the power of the unknown?
Letting $x=e^t$, you rewrite the equation $x^x=7$ as $$e^{te^t}=7,$$ or $$te^t=\ln(7),$$ which is solved by means of the Lambert function: $$x=e^{W(\ln(7))}=\frac{\ln(7)}{W(\ln(7))}.$$ There is no better analytical way, I am afraid.
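SciPy exposes the Lambert $W$ function, so the closed form can be evaluated directly; a minimal sketch:

```python
import numpy as np
from scipy.special import lambertw

# Solve x**x = 7 via x = exp(W(ln 7)) = ln(7)/W(ln 7).
w = lambertw(np.log(7)).real   # principal branch; the result is real here
x = np.exp(w)
print(x, x**x)                 # x ~ 2.316..., and x**x ~ 7
```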
Linear transformation given equation of plane in $\mathbb R^3$ and line in $\mathbb R^2$
You know that$$T(x,y,z)=(a_{11}x+a_{12}y+a_{13}z,a_{21}x+a_{22}y+a_{23}z),$$for some $a_{11},a_{12},a_{13},a_{21},a_{22},a_{23}\in\mathbb R$. You want them to be such that $T(1,0,0)$, $T(0,1,0)$, and $T(0,0,1)$ all belong to the given line. This means that$$\left\{\begin{array}{l}a_{11}-2a_{21}=2\\a_{12}-2a_{22}=2\\a_{13}-2a_{23}=2.\end{array}\right.$$So, the solution is:$$T(x,y,z)=\bigl((2a+2)x+(2b+2)y+(2c+2)z,ax+by+cz\bigr),$$with $a,b,c\in\mathbb R$.
Generalizing the Prouhet–Tarry–Escott problem: Large collections of integer tuples with equal sums of $0$th, ..., $m$th powers
Mathematician, "Chen Shuwen" has givn solution for, $m=4$ &amp; is shown below. $m=1,2,3,4$ $(401,521,641,881,911)^m=(431,461,701,821,941)^m$ The link to his web page's is given below: http://eslpower.org
Tiling a rectangle with a single polyomino
I can think of a simple example where this is not true, at least in the case where you decompose a polyomino into two polyominoes of different sizes. Start with a $4\times5$ rectangle. Obviously you can tile a bigger rectangle (of compatible size) with such an object. Now decompose it into two polyominoes in the following way: take a cross (three rows; one square in the first row, three in the second, one in the third), and cut it out from the $4\times5$ rectangle, such that the top of the cross is in the middle of the long side. Neither the cross nor the remaining figure can be used to tile any rectangle. I assume that you can even increase the size of the original polyomino and the size of the cross, so that the area of the cross is half of the area of the original polyomino, in case you want to decompose into equal-size parts.
Is this true that $N_G(H) \subseteq N_G(H \cap K)$ if not what is the counter-example?
It's not true. Consider the 2-Sylow subgroup $V\trianglelefteq A_4$. Let $G=A_4$, $H=V$ and let $K$ be any 2 element subgroup of $V$. Then $N_G(H)=G$ since $V\trianglelefteq G$, but $N_G(H\cap K)=N_G(K)=V$, which is smaller. ($G/V$ permutes those three 2 element subgroups cyclically.)
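This is small enough to confirm computationally with SymPy's permutation groups (a sketch; the normalizer is computed by brute force rather than a library call):

```python
from sympy.combinatorics import Permutation
from sympy.combinatorics.named_groups import AlternatingGroup

G = AlternatingGroup(4)
# K = a 2-element subgroup of the Klein four-group V < A4.
e = Permutation([0, 1, 2, 3])        # identity on {0, 1, 2, 3}
k = Permutation(0, 1)(2, 3)
K = {e, k}

# Brute-force normalizer: all g in G with g K g^{-1} = K.
N = [g for g in G.elements if {g * h * ~g for h in K} == K]
print(len(N))  # 4 = |V|, strictly smaller than |N_G(V)| = |A4| = 12
```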
integration by parts transforming a vector integral to vector times divergence?
I found an answer in Griffiths' Introduction to electrodynamics (problem 5.7) which poses a problem of relating the volume integral of $\mathbf{J}$ to the dipole moment, and hints that this can be done by expanding $$\int \boldsymbol{\nabla} \cdot ( x \mathbf{J} ) d^3 x.$$ That expansion is $$\int \boldsymbol{\nabla} \cdot ( x \mathbf{J} ) d^3 x= \int (\boldsymbol{\nabla} x \cdot \mathbf{J}) d^3 x+ \int x (\boldsymbol{\nabla} \cdot \mathbf{J}) d^3 x.$$ Doing the same for the other coordinates and summing gives $$\sum_{i = 1}^3 \mathbf{e}_i \int \boldsymbol{\nabla} \cdot ( x_i \mathbf{J} ) d^3 x=\int \mathbf{J} d^3 x+ \int \mathbf{x} (\boldsymbol{\nabla} \cdot \mathbf{J}) d^3 x.$$ I think the boundary condition argument would be to transform the left hand side using the divergence theorem $$\int \mathbf{J} d^3 x+ \int \mathbf{x} (\boldsymbol{\nabla} \cdot \mathbf{J}) d^3 x=\sum_{i = 1}^3 \mathbf{e}_i \int_{S} ( x_i \mathbf{J} ) \cdot \hat{\mathbf{n}} dS$$ and then argue that this is zero for localized-enough currents by taking this surface to infinity, where $\mathbf{J}$ is zero. The end result is the relation that Jackson calls "integration by parts".
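The product-rule identity that drives the expansion is easy to confirm symbolically. A SymPy sketch for generic components $J_x, J_y, J_z$:

```python
from sympy import symbols, Function, simplify

x, y, z = symbols('x y z')
Jx, Jy, Jz = (Function(n)(x, y, z) for n in ('J_x', 'J_y', 'J_z'))

def div(F):
    """Divergence of a 3-component vector field."""
    return F[0].diff(x) + F[1].diff(y) + F[2].diff(z)

lhs = div((x * Jx, x * Jy, x * Jz))   # div(x J)
rhs = Jx + x * div((Jx, Jy, Jz))      # (grad x) . J + x div J
print(simplify(lhs - rhs) == 0)       # True
```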
Why are two characters of a commutative Banach algebra with the same kernel equal?
Suppose $a$ is an element of the algebra and $\chi_1(a) = \lambda$. Since characters are unital, $\chi_1(a - \lambda\mathbf 1) = \chi_1(a) - \lambda = 0$, so $a - \lambda \mathbf{1} \in \ker \chi_1 = \ker \chi_2$, and therefore $$ 0 = \chi_2(a - \lambda \mathbf{1}) = \chi_2(a) - \lambda, $$ i.e. $\chi_2(a) = \lambda = \chi_1(a)$. Since $a$ was arbitrary, $\chi_1 = \chi_2$.
Why must fractals be self-referential?
"How that definition implies the self-similarity property we observe." It does not. Fractals, defined as sets with $\operatorname{H-dim}<\operatorname{T-dim}$, have no intrinsic reason to be self-similar. That we tend to think of fractals as self-similar is a kind of observational bias. We can only imagine sets (and maps) with a finite (and pretty low at that) number of different features. Our visual field (either physical or mental) has limited resolution. A "generic" subset of the real line is something we can never hope to comprehend in its entirety. To augment our limited vision, we can zoom in on a set and see what it looks like on some scale. But there are infinitely many scales. So we give up and say: let's think of the sets that have the same pattern on every scale, or perhaps have a finite number of patterns that appear on different scales in some regular way. What else is there to do? By the way, the smooth, non-fractal, objects also have such paucity of features: on all sufficiently small scales, a smooth surface looks just like a plane. Conclusion: don't try to infer self-similarity from inequality of dimensions. If you want a definition that makes self-similarity transparent, look at the definition of self-similar set.
Cardinality and bijections
Some extended HINTS: Every real number $x$ can be written uniquely in the form $$x=n+\sum_{k\ge 1}\frac{d_k}{10^k}\;,$$ where $n$ is an integer, each $d_k\in\{0,1,\ldots,9\}$, and $\{k\in\Bbb Z^+:d_k\ne 0\}$ is infinite. (If $x>0$, this is just the non-terminating decimal expansion of $x$. It represents $0$ as $-1+0.\overline9$, and you should think about how it represents negative $x$.) The map $$\Bbb R\to\Bbb N^\Bbb N:x\mapsto\langle n,d_1,d_2,d_3,\ldots\rangle$$ is therefore an injection. The slick way to get an injection from $\Bbb N^{\Bbb N}$ to $\Bbb R$ is to use continued fractions. The continued fractions $[a_1,a_2,a_3,\ldots]$ with all $a_k>0$ are precisely the irrational numbers in $(0,1)$, each of which has a unique continued fraction expansion, so the map that takes the sequence $\langle n_0,n_1,n_2,\ldots\rangle\in\Bbb N^{\Bbb N}$ to the continued fraction $[n_0+1,n_1+1,n_2+1,\ldots]$ is an injection. (My $\Bbb N$ includes $0$, so I have to add $1$ to each $n_k$ to ensure that I get a positive integer.) There are other ways to do it. Given $\sigma=\langle n_k:k\in\Bbb N\rangle$, let $m_0=n_0$ and $m_{k+1}=m_k+n_{k+1}+1$ for each $k\in\Bbb N$, and let $\widehat\sigma=\langle m_k:k\in\Bbb N\rangle$; the map $\sigma\mapsto\widehat\sigma$ is injective, and $\widehat\sigma$ is a strictly increasing sequence. Now let $$x_\sigma=\sum_{k\in\Bbb N}\frac1{3^{m_k}}\;,$$ and show that the map $\sigma\mapsto x_\sigma$ is injective. This isn’t hard if you remember that $$\sum_{k>m}\frac1{3^k}=\frac{\frac1{3^{m+1}}}{1-\frac13}=\frac1{2\cdot3^m}<\frac1{3^m}\;.$$ Once you have $\Bbb R\approx\Bbb N^{\Bbb N}$, show that $\Bbb R^{\Bbb N}\approx\left(\Bbb N^{\Bbb N}\right)^{\Bbb N}\approx\Bbb N^{\Bbb N\times\Bbb N}$, and use the fact that if $B\approx C$, then $A^B\approx A^C$; this will give you the second part. For the last part, one injection can come from the observation that constant functions are continuous. The other is most easily derived from the fact that if $f,g:\Bbb R\to\Bbb R$ are continuous, and $f\upharpoonright\Bbb Q=g\upharpoonright\Bbb Q$, then $f=g$.
How does a piecewise function work when multiple conditions are met? Does it default to the first (like a switch-case) or is the statement invalid?
For your particular function, when $x > 5$ and $x$ is even, $f(x)$ is generally not well-defined, which means that $x$ maps to multiple things. This can be resolved if $g(x) = h(x)$ when $x>5$ and $x$ is even. So if you mean "valid" as in well-defined, it depends on whether or not the previous condition is met. A function must be well-defined (as per the definition of a function), but there are examples of mappings which aren't functions. For instance, consider the $n$-th root of a real number $a$. For $n>1$ and $a \neq 0$, there is more than one complex number satisfying $x^n = a$. Hence, we mathematicians avoid this problem by only using a specific "branch cut". The most common example is the principal square root, $\sqrt{\hspace{1em}}$, which is well-defined and hence a function.
How to detect inflection points without function expression in a live time series scenario?
OK, VERY IMPORTANT: what you've circled are most definitely NOT inflection points. You have circled local minima. There are many algorithms for finding those that are very efficient. One of the easiest things you could do, I think, is simply multiply all the $y$ values by $-1$ and then use a peak-finding routine. Peak-finders are very common and not difficult to find. The trick with peak-finders is to choose the window size. If you choose it too large, your algorithms gets "ham-fisted" and won't detect peaks in a smaller region very well. On the other hand, your data oscillates enough to where setting the window size too small will find far too many peaks. Another thing you could try is first filtering your data before running it through the peak-finder. I find a median filter works well if you have a lot of outliers. Your example data doesn't look like it has too many outliers, so maybe just a regular low-pass filter would do the trick. So that's my recommendation: use a low-pass filter (plot the filtered on top of the unfiltered to make sure your cutoff frequency is set reasonably) then use the peak-finder on $-y.$
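A minimal SciPy sketch of that recipe (low-pass filter first, then run a peak finder on $-y$); the cutoff and prominence values are placeholders you would tune to your data:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def find_local_minima(y, fs, cutoff, prominence=None):
    """Low-pass filter the series, then locate local minima by
    running a peak finder on the negated signal."""
    b, a = butter(N=3, Wn=cutoff, fs=fs)   # 3rd-order Butterworth low-pass
    y_smooth = filtfilt(b, a, y)           # zero-phase filtering
    minima, _ = find_peaks(-y_smooth, prominence=prominence)
    return minima, y_smooth

# Toy example: a noisy oscillation with dips (placeholder parameters).
t = np.linspace(0, 10, 1000)               # ~100 samples per time unit
y = np.sin(2 * np.pi * 0.3 * t) + 0.2 * np.random.default_rng(0).normal(size=t.size)
idx, y_s = find_local_minima(y, fs=100.0, cutoff=2.0, prominence=0.5)
print(idx)  # indices of the detected dips
```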
Manifold of fixed points
I'll write the action as a morphism of groups $$ \begin{array}{ccc} G &amp; \longrightarrow &amp; Diff(M)\\ g &amp; \longmapsto &amp; \psi_g \end{array}. $$ If $x\in M^G$ we have that $d_x\psi_g\in End(T_xM)$, so we get a linear action of $G$ on $T_xM$. If $\rho$ is a $G$-invariant metric (it exists if you take $G$ compact, for example) on $M$, then the exponential map $$\exp^\rho_x:T_xM\rightarrow M$$ is equivariant with respect to the two actions above: $\exp^\rho_x(d_x\psi_g(v))=\psi_g(\exp_x^\rho(v))$. Recall that $\exp_x^\rho$ gives a diffeomorphism between a neighbourhood $U_0$ of $0\in T_xM$ and a neighbourhood $U_x$ of $x\in M$. Consider $U_0^G$ and $U_x^G$ the corresponding sets of fixed points with respect to the actions above. Since $\exp^\rho_x$ is equivariant it also defines a diffeomorphism between $U_0^G$ and $U_x^G$. Finally, since the action of $G$ on $T_xM$ is linear, its fixed points form a linear subspace, and then $U_0^G$ is an open subset of a vector space. Summing it all up, $(\exp_x^\rho)^{-1}:U_x^G\rightarrow U_0^G$ is a chart about $x\in M^G$.
A real analysis problem on the integral inequality.
Well, as $\alpha < \beta$ you have $-\alpha = -\beta + \underbrace{\beta - \alpha}_{>0}$, hence: $$\phi(x) x^2 e^{-\alpha x^2} = \phi (x) x^2 e^{-\beta x^2}\cdot \underbrace{e^{(\beta -\alpha)x^2}}_{\geq 1} \geq \phi (x) x^2 e^{-\beta x^2}\; .$$ Therefore your inequality with a universal constant $C_0=C_0(\alpha ,\beta)$ cannot hold for every bounded measurable function. To prove this, let $n\in \mathbb{N}$ and: $$\phi(x) = \phi_n(x)= \frac{1}{x}\cdot \chi_{[n,2n[}(x) = \begin{cases} \frac{1}{x} &\text{, if } n\leq x < 2n \\ 0 &\text{, otherwise}\end{cases}$$ which is measurable and bounded (because $0\leq \phi_n(x)\leq 1$); using such a test function we can explicitly compute: $$\begin{split}\int_0^\infty \phi_n(x) x^2\ e^{-\alpha x^2}\ \text{d} x &= \int_n^{2n} x\ e^{-\alpha x^2}\ \text{d} x\\ &=\frac{1}{2\alpha}\ e^{-\alpha n^2}\ (1-e^{-3\alpha n^2})\\ \int_0^\infty \phi_n(x) x^2\ e^{-\beta x^2}\ \text{d} x &= \frac{1}{2\beta}\ e^{-\beta n^2}\ (1-e^{-3\beta n^2})\; . \end{split}$$ Therefore the ratio: $$\frac{\int_0^\infty \phi_n(x) x^2\ e^{-\alpha x^2}\ \text{d} x}{\int_0^\infty \phi_n(x) x^2\ e^{-\beta x^2}\ \text{d} x} = \frac{\beta}{\alpha}\ e^{(\beta - \alpha)n^2} \frac{1-e^{-3\alpha n^2}}{1-e^{-3\beta n^2}}$$ approaches $+\infty$ when $n\to +\infty$ (because $\beta-\alpha >0$); this fact implies that no universal constant $C_0$ can make your inequality work for every bounded measurable function $\phi$.
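The closed forms make the blow-up easy to see numerically. A short NumPy sketch (with arbitrary sample values $\alpha = 1$, $\beta = 2$):

```python
import numpy as np

alpha, beta = 1.0, 2.0   # any 0 < alpha < beta

def ratio(n):
    """Closed-form ratio of the two integrals for the test function phi_n."""
    num = (1 / (2 * alpha)) * np.exp(-alpha * n**2) * (1 - np.exp(-3 * alpha * n**2))
    den = (1 / (2 * beta)) * np.exp(-beta * n**2) * (1 - np.exp(-3 * beta * n**2))
    return num / den

for n in (1, 2, 3):
    print(n, ratio(n))   # grows like exp((beta - alpha) * n^2): no uniform C_0 exists
```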
Integral over X as supremum of integrals over finite subsets of X.
You can work directly from the definition. Since $f$ is integrable, we have a sequence $s_n$ of simple measurable functions such that $0 \le s_n \le f$ and $\int s_n \to \int f$. Each $s_n$ has the form $s_n = \sum \alpha_k 1_{A_k}$, where $\alpha_k > 0$ and the $A_k$ are measurable, $\mu A_k < \infty$ and, without loss of generality, disjoint. Let $E_n = \cup A_k$. Then $\mu E_n < \infty$. Then we have $s_n \le f \cdot1_{E_n}$, and hence $\int s_n \le \int f \cdot1_{E_n} = \int_{E_n} f \le \int f$. The result follows, since $\int s_n \to \int f$. (Note: The fact that $\mu A_k < \infty$ follows from $\int s_n = \sum_k \alpha_k \mu A_k \le \int f < \infty$, and $\alpha_k >0$.) Addendum: We have $\sup \{ \int_E f \mid \mu E < \infty \} \ge \int_{E_n} f$. Since $\int_{E_n} f \to \int f$, we have $\sup \{ \int_E f \mid \mu E < \infty \} \ge \int f$, as required.
Bayesian and frequency tail estimation.
Let's say we were just trying to estimate $\theta$ rather than $1-F(a\mid \theta)$. Then the relation between these two quantities already depends on a number of factors. First off, there is the prior, which could conceivably cause a significant deviation in either direction. Let's say you're using a flat prior so that your maximum posterior is the same as your maximum likelihood. Now there's something else to worry about. The plug-in is going to be the mode of your posterior distribution whereas the Bayesian is going to be the average. How these relate depends on skewness and such (consider that the mode of an exponential is always zero, regardless of its mean). So let's simplify further and say $\pi(\theta\mid x)$ is symmetric (maybe it's a large sample so it's nearly Gaussian). Okay, now they line up. But now say we switch to estimating $1-F(a\mid \theta).$ We want to know how the mean of this function of $\theta$ relates to the function with the mean plugged in. This basically depends on the convexity of the function. If the function is convex, Jensen's inequality tells you that the mean of the function is larger than the function of the mean. If it's concave, it's the opposite. So it really depends on how $1-F(a\mid \theta)$ is shaped. If, for instance, $\theta$ is a location parameter for a normal, then $1-F(a\mid\theta)$ will be convex for $\theta<a$ and concave for $\theta>a.$ Again a mixed bag. Being that all these factors can push the difference either way, I can't tell without more information about your specific situation why the Bayesian version is larger. My best guess is perhaps you're taking $a$ large, and your typical values of $\theta$ (which is a location parameter) are smaller than that, so $1-F(a\mid \theta)$ is convex around there. But I don't even know if $\theta$ is a location parameter for you, so I'm just guessing.
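To see the Jensen effect concretely, here is a small NumPy/SciPy simulation sketch under purely hypothetical assumptions: a Normal($\theta$, 1) model with a Normal(0, 1) posterior for the location $\theta$, and a tail point $a$ in the convex region:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

a = 3.0                                                  # tail point, well above typical theta
theta_post = rng.normal(loc=0.0, scale=1.0, size=1_000_000)  # hypothetical posterior draws

# For a Normal(theta, 1) model, 1 - F(a | theta) = P(X > a) = norm.sf(a - theta).
bayes = np.mean(norm.sf(a - theta_post))    # posterior mean of the tail probability
plug_in = norm.sf(a - theta_post.mean())    # tail probability at the posterior mean

print(bayes, plug_in)   # bayes > plug_in here, as Jensen's inequality suggests
```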
How to prove that $\sin x$ is a lipschitz continuous function on the real line?
Use this identity (see here): $$\sin(x)-\sin(y)=2\cos((x+y)/2)\sin((x-y)/2).$$ Hence $$|\sin(x)-\sin(y)|\leq 2|\sin((x-y)/2)|\leq |x-y|.$$ where we used the inequalities $|\cos t|\leq 1$ and $|\sin t|\leq |t|$.
Functional Equation (no. of solutions): $f(x+y) + f(x-y) = 2f(x) + 2f(y)$
Choosing $x=y=0$ gives $f(0)=0$. Also, by choosing $x=y$ we see that $f(2x)=4f(x)$. Moreover, choosing $x=0$ and using $f(0)=0$, we get $f(y)=f(-y)$. Now by strong induction one can prove that $f(nx)=n^2f(x)$. Then choose $x$ as $\frac{x}{n}$ in the preceding equation to get: $$ f(x)={n^2}f(\frac{x}{n})\rightarrow\frac{1}{n^2}f(x)=f(\frac{x}{n}) $$ And finally we have: $$ f(\frac{mx}{n})=\frac{1}{n^2}f(mx)=\frac{m^2}{n^2}f(x). $$ Now assume that $f(1)=a$ for some $a\in\mathbb{R}$; then we get $$ f(x)=ax^2 $$ for all rational $x=\frac{m}{n}$.
Number of ways to arrange $a,a,b,b,c,d$ in the grid such that no row is empty
You have not accounted for the empty spaces in the grid. There are $\binom{11}{2}$ ways to choose two of the eleven positions for the $a$s, $\binom{9}{2}$ ways to choose two of the remaining nine positions for the $b$s, $\binom{7}{1}$ ways to choose one of the remaining positions for the $c$, and $\binom{6}{1}$ ways to choose one of the remaining six positions for the $d$. Hence, you should have obtained $$\binom{11}{2}\binom{9}{2}\binom{7}{1}\binom{6}{1} = \frac{11!}{2!9!} \cdot \frac{9!}{2!7!} \cdot \frac{7!}{1!6!} \cdot \frac{6!}{1!5!} = \frac{11!}{2!2!1!1!5!}$$ distinguishable arrangements of two $a$s, two $b$s, one $c$, one $d$ in the grid without restriction. The factors of $2!$, $2!$, $1!$, $1!$, and $5!$ in the denominator account, respectively, for the number of ways the two $a$s, two $b$s, one $c$, one $d$, and five empty squares can be permuted among themselves without producing an arrangement that is distinguishable from the given arrangement. To see why, think of the problem as arranging two $a$s, two $b$s, one $c$, one $d$, and five $e$s in the eleven squares of the grid, where $e$ represents an empty square. One empty row: The row can have either three squares or one square. A row with three squares is empty: There are $\binom{3}{1}$ ways to choose which of the rows with three squares is empty, $\binom{8}{2}$ ways to select two of the remaining eight positions for the $a$s, $\binom{6}{2}$ ways to choose two of the remaining six positions for the $b$s, $\binom{4}{1}$ ways to choose one of the remaining four positions for the $c$, and $\binom{3}{1}$ ways to choose one of the remaining three positions for the $d$. Hence, there are $$\binom{3}{1}\binom{8}{2}\binom{6}{2}\binom{4}{1}\binom{3}{1}$$ such arrangements. A row with one square is empty: There are $\binom{2}{1}$ ways to choose which of the two rows with one square is empty, $\binom{10}{2}$ ways to select two of the remaining ten positions for the $a$s, $\binom{8}{2}$ ways to choose two of the remaining eight positions for the $b$s, $\binom{6}{1}$ ways to choose one of the remaining six positions for the $c$, and $\binom{5}{1}$ ways to choose one of the remaining five positions for the $d$. Hence, there are $$\binom{2}{1}\binom{10}{2}\binom{8}{2}\binom{6}{1}\binom{5}{1}$$ such arrangements. Two empty rows: If two rows with three squares were empty, then there would not be a sufficient number of squares left in the grid for the six letters. Hence, we are left with two possibilities. Either a row with three squares and a row with one square are empty or both rows with one square are empty. A row with three squares and a row with one square are empty: There are $\binom{3}{1}$ ways to choose which of the rows with three squares is empty, $\binom{2}{1}$ ways to choose which of the rows with one square is empty, $\binom{7}{2}$ ways to select two of the remaining seven positions for the $a$s, $\binom{5}{2}$ ways to choose two of the remaining five positions for the $b$s, $\binom{3}{1}$ ways to choose one of the remaining three positions for the $c$, and $\binom{2}{1}$ ways to choose one of the remaining two positions for the $d$. Hence, there are $$\binom{3}{1}\binom{2}{1}\binom{7}{2}\binom{5}{2}\binom{3}{1}\binom{2}{1}$$ such arrangements.
Both rows with one square are empty: There are $\binom{9}{2}$ ways to select two of the remaining nine positions for the $a$s, $\binom{7}{2}$ ways to choose two of the remaining seven positions for the $b$s, $\binom{5}{1}$ ways to choose one of the remaining five positions for the $c$, and $\binom{4}{1}$ ways to choose one of the remaining four positions for the $d$. Hence, there are $$\binom{9}{2}\binom{7}{2}\binom{5}{1}\binom{4}{1}$$ such arrangements. Three empty rows: Since at most five squares can be left empty, the only way this can occur is if both of the rows with one square and one of the rows with three squares are empty. There are $\binom{3}{1}$ ways to select which of the rows with three squares is empty, $\binom{6}{2}$ ways to select two of the remaining six positions for the $a$s, $\binom{4}{2}$ ways to choose two of the remaining four positions for the $b$s, $\binom{2}{1}$ ways to choose one of the remaining two positions for the $c$, and $\binom{1}{1}$ way to choose the remaining position for the $d$. Hence, there are $$\binom{3}{1}\binom{6}{2}\binom{4}{2}\binom{2}{1}\binom{1}{1}$$ such arrangements. It is not possible to have more than three empty rows. By the Inclusion-Exclusion Principle, the number of admissible arrangements is $$\binom{11}{2}\binom{9}{2}\binom{7}{1}\binom{6}{1} - \binom{3}{1}\binom{8}{2}\binom{6}{2}\binom{4}{1}\binom{3}{1} - \binom{2}{1}\binom{10}{2}\binom{8}{2}\binom{6}{1}\binom{5}{1} + \binom{3}{1}\binom{2}{1}\binom{7}{2}\binom{5}{2}\binom{3}{1}\binom{2}{1} + \binom{9}{2}\binom{7}{2}\binom{5}{1}\binom{4}{1} - \binom{3}{1}\binom{6}{2}\binom{4}{2}\binom{2}{1}\binom{1}{1} = 14580.$$
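The grid shape is never stated explicitly here, but the counts above imply three rows of three squares and two rows of one square (eleven squares in all). Under that assumption, a brute-force Python sketch confirms the inclusion-exclusion total of 14580:

```python
from itertools import combinations

# Assumed grid shape implied by the counts: rows of sizes 3, 3, 3, 1, 1.
rows = [set(range(0, 3)), set(range(3, 6)), set(range(6, 9)), {9}, {10}]
squares = set(range(11))

count = 0
for A in combinations(sorted(squares), 2):          # positions of the two a's
    restA = squares - set(A)
    for B in combinations(sorted(restA), 2):        # positions of the two b's
        restB = restA - set(B)
        for c in restB:                             # position of c
            for d in restB - {c}:                   # position of d
                filled = set(A) | set(B) | {c, d}
                if all(row & filled for row in rows):   # every row nonempty
                    count += 1
print(count)  # 14580, matching the inclusion-exclusion result
```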
Proving Multivariate Limit with Squeeze Theorem
Hint: Observe \begin{align} |x^2-6y^2| = |x+\sqrt{6}y||x-\sqrt{6}y| \leq (|x|+3|y|)|x-\sqrt{6}y|, \end{align} where the last step uses $\sqrt{6} < 3$.