What is the number of automorphisms (including identity) for permutation group $S_3$ on 3 letters?
What you did doesn't really work: you are listing elements of $S_3$, instead of automorphisms of $S_3$. While there is a natural isomorphism between the two, I suspect that if you are confusing elements and automorphisms, you are not expected to know this yet. Try this: If $f\colon S_3\to S_3$ is a group homomorphism, and you know what $f(12)$ and $f(123)$ are, then you know $f(\sigma)$ for all $\sigma\in S_3$. If $f,g\colon S_3\to S_3$ are group homomorphisms and have the same values at $(12)$ and at $(123)$, then $f=g$. $f(12)$ must be an element of order $2$. So there are, at most, three possibilities. $f(123)$ must be an element of order $3$, so there are at most two possibilities. Conclude that there are at most six automorphisms of $S_3$. Exhibit six different automorphisms of $S_3$ (Hint: Conjugation...)
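If you want a numerical sanity check of the count (not part of the argument above): the following sketch represents $S_3$ as permutation tuples, enumerates all bijections of the group to itself, and counts those that respect composition.

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p(q(i)); permutations stored as tuples of images of 0,1,2."""
    return tuple(p[q[i]] for i in range(3))

def count_automorphisms():
    G = list(permutations(range(3)))  # the 6 elements of S_3
    count = 0
    for images in permutations(G):    # candidate bijections G -> G
        f = dict(zip(G, images))
        if all(f[compose(a, b)] == compose(f[a], f[b]) for a in G for b in G):
            count += 1
    return count

print(count_automorphisms())  # 6
```

This agrees with the counting argument: at most $3 \times 2 = 6$, and exactly $6$ realized by conjugation.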
(Another) linear ODE first order - solving by Laplace transformation?
Well, at $t=0$, $\epsilon=0$ so $\sigma=E_1 \epsilon + \eta \dot{\epsilon}$ gives you $\dot{\epsilon}_0 = \sigma_0 / \eta$, not $\dot{\epsilon}_0=0$. ETA: Consider the problem. You have to understand that the unknown function is $\epsilon(t)$. You are given $\sigma(t)=\sigma_0$, so your differential equation is $$ \eta \dot{\epsilon} + E_1 \epsilon = \sigma_0 $$ Notice that this is a first-order ODE, so you need only one initial condition - which is given to you as $\epsilon(0)=0$. Now take Laplace transforms of both sides; we'll write $L(\epsilon(t))$ as $F(s)$. Recall the formula for the Laplace transform of a derivative $$ L\left(\frac{d\epsilon}{dt}\right) = s F(s) - \epsilon(0^-) $$ The Laplace transform of the RHS, a constant, is just $\dfrac{\sigma_0}{s}$. So you have $$ \eta s F(s) -\eta \epsilon(0^-) + E_1 F(s) = \frac{\sigma_0}{s} $$ Since the initial $\epsilon$ is 0, on rearranging you'll have $$ F(s)= \frac{\sigma_0}{s(E_1 + \eta s)} $$ Take partial fractions and you'll immediately see that the first term gives you a constant, and the second term the negative exponential. $$ \epsilon(t) = \frac{\sigma_0 }{E_1} \left(1 - \exp \left( \frac{-E_1 \; t}{\eta} \right) \right) $$ Now you can go back and check the initial value of $\dot{\epsilon}$: $$ \left. \dot{\epsilon} \right |_{t=0} = \sigma_0 / \eta $$ But you did not really require it to solve the problem - because it is a first-order ODE in $\epsilon$
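A quick numerical check that the solution above really satisfies the ODE and both initial facts (the parameter values below are arbitrary, chosen only for illustration):

```python
import math

# hypothetical illustrative values of the constants
sigma0, E1, eta = 2.0, 3.0, 0.5

def eps(t):
    return sigma0 / E1 * (1 - math.exp(-E1 * t / eta))

def eps_dot(t):  # analytic derivative of eps
    return sigma0 / eta * math.exp(-E1 * t / eta)

# the ODE eta*eps' + E1*eps = sigma0 holds at every t,
# eps(0) = 0, and the initial slope is eps'(0) = sigma0/eta
for t in (0.0, 0.1, 1.0, 5.0):
    assert abs(eta * eps_dot(t) + E1 * eps(t) - sigma0) < 1e-12
assert abs(eps(0.0)) < 1e-12
assert abs(eps_dot(0.0) - sigma0 / eta) < 1e-12
print("ok")
```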
Is there an opposite of proof by contradiction?
This is called circular reasoning and it's not sound because you're assuming the conclusion to begin with.
Monic polynomial reducible over rationals
That is the original form of Gauss's Lemma. A proof follows easily from the modern form of GL. Let $\,Q = \dfrac{f}c,\ R = \dfrac{g}d,\, $ $\,f,g\,$ primitive, $\,c,d>0.\,$ $\, QR = fg/(cd) = h\in\Bbb Z[x]\,$ so $\,fg = cd\,h.\,$ By GL $\,f,g\,$ primitive $\Rightarrow$ $fg$ primitive, so $\,c,d = 1.\,$ Thus $\,Q =f,\ R = g\,$ are both $\in \Bbb Z[x].$ Remark $\ $ More generally the following is true. If $f,g\in\Bbb Q[x]$ and $\,fg\in\Bbb Z[x]\,$ then $\,f_i g_j\in \Bbb Z$ for all coefficients $f_i,g_j$ of $f,g.\,$ In particular, if $f$ is monic, taking $f_i = 1$ = leading coefficient shows that every $\,g_j\in \Bbb Z,\,$ and, similarly, every $f_i\in \Bbb Z,\,$ yielding the above special case. This general form is true over any integrally-closed domain (it is equivalent to integral closure). This is sometimes called the Gauss-Kronecker Lemma, or Dedekind's Prague Theorem. Dedekind discovered a form applying to algebraic integers after studying a divisor-theoretic form that appeared in Kronecker's seminal work on divisor theory. For much further discussion, both historical and mathematical, see Harold Edwards, Divisor Theory. Below is Gauss's original form of the Gauss Lemma, from Art. $42$ of Disq. Arith. Notice that it is of the form mentioned above, not the reformulated modern form using primitivity or content.
Evaluate area of the field defined by $\left (\frac{x^2}{4}+y^2 \right )^2=x^2+y^2$
Polar coordinates are a good choice in order to find the area enclosed by the peanut-shaped region given by your equation: By setting $x=\rho\cos\theta$ and $y=\rho\sin\theta $ we have: $$ A = 32\int_{-\pi}^{+\pi}\frac{d\theta}{(5-3\cos(2\theta))^2} = 5\,\pi.$$ Also notice that your peanut-curve is an epitrochoid.
Height/Radius ratio for maximum volume cylinder of given surface area
Let $r$ be the radius and $h$ the height of a cylinder with total surface area $A$ (a fixed constant). Since the cylindrical container is closed at the top and bottom, its surface area is $$A=\text{(area of lateral surface)}+2\,\text{(area of circular top/bottom)}$$$$A=2\pi rh+2\pi r^2$$ $$h=\frac{A-2\pi r^2}{2\pi r}=\frac{A}{2\pi r}-r\tag 1$$ Now, the volume of the cylinder is $$V=\pi r^2h=\pi r^2\left(\frac{A}{2\pi r}-r\right)=\frac{A}{2}r-\pi r^3$$ Differentiating $V$ w.r.t. $r$, we get $$\frac{dV}{dr}=\frac{A}{2}-3\pi r^2$$ $$\frac{d^2V}{dr^2}=-6\pi r<0\ \ (\forall\ \ r>0)$$ so any critical point gives a maximum volume. Setting $\frac{dV}{dr}=0$ for the maximum, $$\frac{A}{2}-3\pi r^2=0\implies \color{red}{r}=\color{red}{\sqrt{\frac{A}{6\pi}}}$$ Substituting this value of $r$ in (1), we get $$\color{red}{h}=\frac{A}{2\pi\sqrt{\frac{A}{6\pi}}}-\sqrt{\frac{A}{6\pi}}=\left(\sqrt{\frac{3}{2}}-\frac{1}{\sqrt 6}\right)\sqrt{\frac{A}{\pi}}=\color{red}{\sqrt{\frac{2A}{3\pi}}}$$ Hence, the ratio of the height $(h)$ to the radius $(r)$ is $$\frac{h}{r}=\frac{\sqrt{\frac{2A}{3\pi}}}{\sqrt{\frac{A}{6\pi}}}=\sqrt{\frac{12\pi A}{3\pi A}}=2$$ $$\bbox[5pt, border:2.5pt solid #FF0000]{\color{blue}{\frac{h}{r}=2}}$$
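As a rough numerical cross-check of the boxed ratio (the surface area value here is arbitrary, picked only for illustration): maximize $V(r)$ on a grid of admissible radii and compare $h/r$ to $2$.

```python
import math

A = 10.0  # hypothetical fixed surface area, for illustration only

def volume(r):
    # height recovered from the constraint A = 2*pi*r*h + 2*pi*r^2, as in (1)
    h = A / (2 * math.pi * r) - r
    return math.pi * r * r * h

# h > 0 forces r < sqrt(A/(2*pi)); scan that interval on a fine grid
r_max = math.sqrt(A / (2 * math.pi))
best_r = max((i * r_max / 10000 for i in range(1, 10000)), key=volume)
best_h = A / (2 * math.pi * best_r) - best_r
assert abs(best_h / best_r - 2) < 1e-2  # matches h/r = 2
print("ok")
```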
Proving that $2^{m-1}$ has remainder $1$ when divided by $m$
$2^{2p}=4^p=3m+1\equiv 1 \pmod m$ so the result follows if $2p\mid m-1$. Since $m$ is odd $2\mid m-1$, and by Fermat's Little Theorem $p\mid 4^p-4=3(m-1)$. Since $p>3$ is prime we must have $p\mid m-1$.
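A quick numeric check, assuming (as the argument above uses) that $m = \frac{4^p-1}{3}$ for a prime $p > 3$; then $2^{m-1}$ should leave remainder $1$ modulo $m$.

```python
# verify the conclusion for a few primes p > 3, with m = (4**p - 1)//3
for p in (5, 7, 11, 13, 17):
    m = (4**p - 1) // 3
    assert m % 2 == 1              # m is odd
    assert pow(2, m - 1, m) == 1   # 2^(m-1) ≡ 1 (mod m)
print("ok")
```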
System of irrational equations
Squaring yields two polynomial equations in $a$ and $c$, $$ a^6 + a^4c^2 - 128a^4 - 128a^2c^2 + 4096c^2=0, $$ and $$ - 81a^2c + 128ac^2 - 5184a + 5184c=0. $$ Over the complex numbers all solutions can be computed by using Groebner bases. Among them the positive real solutions are $(a,b,c)=(0,0,0)$, $(a,b,c)=(24/\sqrt{5},30/\sqrt{5}, 18/\sqrt{5})$. All other solutions are either real with one of the values $a,b,c$ negative, or non-real solutions. We have $b^2=a^2+c^2$, which is, up to a factor here $30^2=24^2+18^2$, i.e., $5^2=4^2+3^2$.
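The stated positive solution can be spot-checked in floating point against both polynomial equations and the relation $b^2 = a^2 + c^2$:

```python
import math

a = 24 / math.sqrt(5)
b = 30 / math.sqrt(5)
c = 18 / math.sqrt(5)

# both polynomial equations should vanish at (a, c)
eq1 = a**6 + a**4*c**2 - 128*a**4 - 128*a**2*c**2 + 4096*c**2
eq2 = -81*a**2*c + 128*a*c**2 - 5184*a + 5184*c
assert abs(eq1) < 1e-6
assert abs(eq2) < 1e-6
assert abs(b**2 - (a**2 + c**2)) < 1e-9   # the Pythagorean relation 5^2 = 4^2 + 3^2, scaled
print("ok")
```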
A tricky elementary number theory question - least common multiple
$$6=1+2+3$$ is a counterexample. Note that your problem is equivalent to $\sum \frac{m_i}{M}=1$, which reminded me of the well known relation $$1=\frac{1}{2}+\frac{1}{3}+\frac{1}{6}$$ Joffan showed below how to iterate this. Another counterexample Here is another way to construct a counterexample: Pick $n$ to be a perfect number. Then $$n=\sum_{d|n, d<n} d$$ and the lcm of the elements on the right hand side is $n$. The first perfect number is $6$ which leads to the same example as above, so let us look at the next one: $$28=1+2+4+7+14$$
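The perfect-number construction is easy to verify mechanically: for a perfect number, the proper divisors sum to $n$ and their lcm is $n$ itself.

```python
from math import gcd
from functools import reduce

def lcm(xs):
    return reduce(lambda a, b: a * b // gcd(a, b), xs)

for n in (6, 28, 496):  # the first three perfect numbers
    divisors = [d for d in range(1, n) if n % d == 0]
    assert sum(divisors) == n   # n is perfect
    assert lcm(divisors) == n   # the lcm of the summands is n
print("ok")
```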
Problem 5 of chapter 6 of Evans PDE 1st.
If $Lv\leq 0$ for $\lambda$ large enough, the maximum principle implies that $\|v\|_{L^\infty(U)} \leq \|v\|_{L^\infty(\partial U)}$. Therefore, $$ \big\||Du|^2\big\|_{L^\infty(U)} \leq \|v\|_{L^\infty(U)} \leq \big\||Du|^2+\lambda u^2\big\|_{L^\infty(U)} \leq \big\||Du|^2+\lambda u^2\big\|_{L^\infty(\partial U)} \leq \big\| |Du|^2 \big\|_{L^\infty(\partial U)} + \lambda \big\| |u|^2 \big\|_{L^\infty(\partial U)} \leq C^2(\|Du\|_{L^\infty(\partial U)}+\|u\|_{L^\infty(\partial U)})^2. $$
Finding a function that satisfies a condition
If what you mean is $$\lim_{h\to0} |f (x + h)- f(x-h)| = 0$$ while $$\lim_{h\to0} |f (x + h)- f(x)| \neq 0,$$ take: $$f(0)=1,\qquad f(x)=0 \ \text{ for } x \neq 0.$$
Calculation of the flux of a vector field through a part of spherical surface
For the spherical surface $S$, complete the square: \begin{align} x^2+y^2+z^2-2az+\overbrace{a^2}^{\text{complete sqr}} &=3a^2+a^2 \\ x^2+y^2+(z-a)^2&=4a^2 \end{align} The surface lies above the $xy$-plane ($z=0$), and the bottom of the region is a disk in the $xy$-plane: $$x^2+y^2+(0-a)^2=4a^2 \implies x^2+y^2=3a^2$$ Together these bound the solid region $G$. The bottom disk, which we can call $T$, has outward normal $\hat{\mathbf{N}}=-\mathbf{k}$. Now, as you calculated, $\text{div}\,\mathbf{F}=2x+2y$. The divergence theorem states that the total flux out of this whole region is the sum of the flux through the bottom disk and through the spherical cap $S$ (the one we want to calculate). So $$\iint_S \mathbf{F} \bullet \hat{\mathbf{N}}\, dS+\iint_T \mathbf{F} \bullet (-\mathbf{k})\, dS=\iiint_G \text{div}\,\mathbf{F} \, dV$$ Because $G$ is symmetric about the planes $x=0$ and $y=0$, its centroid has $\bar{x}=\bar{y}=0$, and the total flux is $$\iiint_G \text{div}\,\mathbf{F} \, dV=(2\bar{x}+2\bar{y})\times\text{Volume of } G =0$$ which means the flux outward through $S$ equals \begin{align} \iint_S \mathbf{F} \bullet \hat{\mathbf{N}}\, dS &=-\iint_T \mathbf{F} \bullet (-\mathbf{k})\, dS \\ &=\iint_T (3+x) \; dxdy \\ &=(3+\bar{x})\times\text{area of } T, \;\;\text{where} \; \bar{x}=0 \; (\text{centroid}) \\ &=3\times\pi\times 3a^2\\ &=9\pi a^2 \end{align} Hope this helps!
Find all intial values for $\displaystyle y'-\frac{y}{2}=2 \cos t$
The limit fails to exist in only one case: $y_0=-\frac{4}{5}$, because then $$y(t)=\frac{-4}{5}\cos t+\frac{8}{5}\sin t,$$ which oscillates. When $y_0>-\frac{4}{5}$, the coefficient multiplying $e^{\frac{t}{2}}$ in the solution is positive, so the limit is $+\infty$. When $y_0<-\frac{4}{5}$, that coefficient is negative, so the limit is $-\infty$.
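For the critical value $y_0 = -\frac45$, the claimed bounded solution can be checked directly against the equation $y' - \frac{y}{2} = 2\cos t$:

```python
import math

def y(t):  # the solution for the critical initial value y0 = -4/5
    return -4/5 * math.cos(t) + 8/5 * math.sin(t)

def y_dot(t):  # its analytic derivative
    return 4/5 * math.sin(t) + 8/5 * math.cos(t)

# y' - y/2 = 2*cos(t) should hold identically, and y(0) = -4/5
for t in (0.0, 0.5, 1.0, 10.0):
    assert abs(y_dot(t) - y(t) / 2 - 2 * math.cos(t)) < 1e-12
assert abs(y(0.0) + 4/5) < 1e-12
print("ok")
```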
If points (0,0), (1,0), and (x,y) are the vertices of a right triangle, determine any equations that x and/or y must satisfy
There is not just one equation: the point $(x,y)$ can lie on either of two lines or on a circle, so you should find an equation for each line and for the circle. There is a way to combine the equations, but it obscures what is going on a little. The lines are $x=0$ and $x=1$. The circle is centered at $(\frac 12,0)$ with radius $\frac 12$, so it is $(x-\frac 12)^2+y^2=\frac 14$. We can combine all that into $$x(x-1)\left[\left(x-\frac 12\right)^2+y^2-\frac 14\right]=0$$
A problem in single variable calculus
Either I am missing something, or this is not necessarily true. Let $a(t) = t^2 + 1$, $L = 1$. $$ \dot{a}^2 + \frac{a^2}{L^2} = (2t)^2 + (t^2 + 1)^2 \geq 1 $$ At $t_0 < 0$: $$ \begin{cases} \dot{a}(t_0) = 2t_0 < 0 \\ a(t_0) = t_0^2 + 1 > 0 \end{cases} $$ But $a(t)$ is always greater than zero. UPDATE With the new condition, $a(t) \leq L$, it's actually provable, assuming $a$ is differentiable on $\mathbb{R}$. Case $\exists t' : a(t') < 0$ Since $a$ is differentiable, it's continuous. $a(t_0) > 0$, $a(t') < 0$. The intermediate value theorem applied to $t_0$ and $t'$ implies $\exists \tilde{t} : a(\tilde{t}) = 0$. Case $\forall t : a(t) \geq 0$ Note that the differential condition as written implies $L \neq 0$. $$a \leq L \implies L > 0 \land \frac{a}{L} \leq 1$$ Now let's have a look at the first inequality again. $$\dot{a}^2 \geq 1 - \frac{a^2}{L^2} \iff \dot{a}^2 + \frac{a^2}{L^2} \geq 1 \implies \left ( \dot{a}(t) = 0 \implies a(t) = L \right )$$ We know that $\exists t_0 : \dot{a}(t_0) < 0$. If $\exists t_1 : \dot{a}(t_1) = 0$ then $t_1 < t_0$ — otherwise select $\alpha = \operatorname{argmin}(\{a(t) : t_0 \leq t \leq t_1\})$; by the extreme value theorem $\dot{a}(\alpha) = 0 \implies a(\alpha) = L \implies \forall t \in [t_0, t_1] \, a(t) = L \implies \dot{a}(t_0) \not< 0$, a contradiction. Note that due to Darboux's theorem $\dot{a}$ cannot become positive without passing through $0$. Unboundedness So $\forall t \geq t_0 \, \dot{a}(t) < 0$; the last problem we could have is $a$ being bounded below. Since $a$ is decreasing on $[t_0,\infty)$ and bounded below, it converges to its infimum $M \geq 0$: that is, $\lim_{t \to \infty} a(t) = M$, or $\forall \epsilon > 0 \, \exists \delta \, \forall t > \delta \, |a(t) - M| < \epsilon$. Since $a$ is strictly decreasing past $t_0$ and $a \leq L$, we have $M < L$, so we may choose $\epsilon$ with $M + \epsilon < L$. Then for $t > \delta$: $$ \dot{a} \leq -\sqrt{1 - \frac{a^2}{L^2}} \leq -\sqrt{1 - \frac{(M + \epsilon)^2}{L^2}} = -C < 0 $$ By the mean value theorem $a(t + 1) - a(t) = \dot{a}(\gamma) \leq -C$, so $a$ decreases by at least $C$ per unit of time; therefore $a\left(t + \left\lceil \frac{L}{C} \right\rceil + 1\right) < 0$, which brings us to the first case again — a contradiction. Conclusion To give an example, consider $a(t) = \sin(t)$ and $L = 1$. $$ \begin{cases} \cos^2(t) + \frac{\sin^2(t)}{1^2} \geq 1 \\ \sin(t) \leq 1 \end{cases} $$ At $t_0$ in the second quadrant, $\sin(t_0) > 0$ and $\cos(t_0) < 0$; also $\sin(0) = 0$.
If a probability is strictly positive, is it discrete?
First, note that any two atoms are either equal or disjoint. For if $F_1, F_2$ are atoms with $F_1 \cap F_2 \ne \emptyset$, then $F_1 \cap F_2^c$ is a strict measurable subset of $F_1$, hence empty, meaning $F_1 \subseteq F_2$, and the reverse inclusion follows by symmetry. Next, note that the number of atoms is at most countable; see Is a family of disjoint atoms in a $\sigma$-finite measurable space at most countable?. So the union $A$ of all atoms is measurable. If $P(\Omega \setminus A) = 0$ then $P$ is concentrated on the atoms and the space is discrete, so we are done. Hence assume $P(\Omega \setminus A) > 0$. By rescaling this reduces us to the case of an atomless probability space. However, on an atomless probability space, there exists a random variable $U : \Omega \to \mathbb{R}$ whose distribution under $P$ is $U(0,1)$; see How to split an integral exactly in two parts. Now the events $\{U = x\}$, as $x$ ranges over $[0,1]$, are all measurable and have probability zero, so they must all be empty. But their union has probability 1, a contradiction.
Realizing polar function via Newtonian gravitation
Basically what you're saying is that the force $F(t)$ is always directed radially, but its magnitude varies arbitrarily. Actually you didn't say $M > 0$, and in fact negative "mass" may be required in some cases. Then the answer is yes. In polar coordinates $(r,\theta)$, your assumption says that $$2 \dot{r} \dot{\theta} + r \ddot{\theta} = 0$$ Basically this is conservation of angular momentum. If $r = f(\theta)$, the chain rule says $\dot{r} = f'(\theta) \dot{\theta}$, so $2 f'(\theta) \dot{\theta}^2 + f(\theta) \ddot{\theta} = 0$. For any given function $f$ with $f > 0$, a solution of the differential equation $2 f'(\theta) \dot{\theta}^2 + f(\theta) \ddot{\theta} = 0$ with, say, $\theta(0) = 0$ and $\dot{\theta}(0)=1$ gives us a motion that starts at $\theta = 0$ and stays on the curve $r = f(\theta)$. Now the existence and uniqueness theorems for differential equations say that (if $f$ is smooth and $f > 0$ everywhere) a unique solution exists in some interval of time $t$. Moreover, the only way such a solution will stop existing is that $\theta$ goes off to $\infty$ in finite time. But we're assuming $f$ is periodic, and because of the conservation of angular momentum you have to come back to the starting position with the same speed as you started, so that won't happen.
Use Pythagoras in a cuboid to find $x$.
First find the diagonal of the top face (using the Pythagorean theorem in the triangle formed by the top face's diagonal, length and width): $(x + 8) ^ 2 + (2x + 4) ^ 2 = D^2$ Then, using the Pythagorean theorem in the triangle formed by the body diagonal, the top face's diagonal and the height, we get: $D^2 + (2x - 1) ^ 2 = (3x + 9) ^ 2$ Combining the two equations above we get: $(x + 8) ^ 2 + (2x + 4) ^ 2 = (3x + 9) ^ 2 - (2x - 1) ^ 2$ Now solve for $x$.
How to represent percentages to degrees on a graph?
Use proportions. If $x$ is the angle corresponding to $50\%$, then $$ \frac{x}{90} = \frac{50}{100} $$ By the way, this manner of measuring angles is referred to as gradians.
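The proportion above, as a tiny helper (the function name is mine, for illustration):

```python
def percent_to_angle(p, full_angle=90):
    """Map a percentage to an angle by direct proportion: x/full_angle = p/100."""
    return p / 100 * full_angle

print(percent_to_angle(50))  # 45.0
```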
What is this group $O(2)/\mathbb{Z}_2$?
Interestingly, the group $PO(2, \Bbb R)$ is isomorphic to $O(2, \Bbb R)$. To see this, let us write $r(\theta)$ for $\begin{pmatrix}\cos \theta & \sin\theta \\-\sin\theta&\cos\theta\end{pmatrix}$ and write $t(\theta)$ for $\begin{pmatrix}\cos \theta & \sin\theta \\\sin\theta&-\cos\theta\end{pmatrix}$. We then have: \begin{eqnarray}r(\theta)r(\theta') &=& r(\theta + \theta'),\\r(\theta)t(\theta') &=& t(\theta' - \theta),\\t(\theta)r(\theta') &=& t(\theta + \theta'),\\t(\theta)t(\theta') &=& r(\theta' - \theta).\end{eqnarray} Now define a map $\phi:O(2, \Bbb R) \rightarrow O(2, \Bbb R)$ sending $r(\theta)$ to $r(2\theta)$ and $t(\theta)$ to $t(2\theta)$. From the above formulas, it is clear that $\phi$ is a surjective (topological) group homomorphism. The kernel of $\phi$ is clearly the subgroup $\{\pm I_2\}$, thus $\phi$ induces an isomorphism from $PO(2, \Bbb R)$ to $O(2, \Bbb R)$.
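The four multiplication formulas can be sanity-checked numerically with plain $2\times 2$ matrix arithmetic (a sketch, independent of the proof itself):

```python
import math, random

def r(t):  # rotation-type element r(theta)
    return [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]

def t_(t):  # reflection-type element t(theta)
    return [[math.cos(t), math.sin(t)], [math.sin(t), -math.cos(t)]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

random.seed(1)
for _ in range(100):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    assert close(mul(r(a), r(b)), r(a + b))
    assert close(mul(r(a), t_(b)), t_(b - a))
    assert close(mul(t_(a), r(b)), t_(a + b))
    assert close(mul(t_(a), t_(b)), r(b - a))
print("ok")
```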
If $G$ is a finite group and $H$ is a normal subgroup of $G$,then prove that $o(G/H)=o(G)/o(H).$
You don't need $H$ to be normal in $G$. Just use the fact that there is a bijection between any two cosets, and that they form a partition of $G$. You can find this in any standard book on abstract algebra. See for instance Proposition 1.25 in http://www.jmilne.org/math/CourseNotes/GT310.pdf
How do I prove: Let $n, m \in \mathbb{N}^+$ with $m < n$. Prove that $n \perp m \Rightarrow (n-m) \perp m$.
We show that if $m$ and $n$ are relatively prime, then $n-m$ and $m$ are relatively prime. Suppose to the contrary that $n-m$ and $m$ are not relatively prime. Then some $d\gt 1$ divides both $n-m$ and $m$. But then $d$ divides $m$ and $(n-m)+m=n$, contradicting the fact that $m$ and $n$ are relatively prime.
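An exhaustive check of the statement over a small range (just a sanity test of the proof above):

```python
from math import gcd

def check(limit):
    # verify: gcd(m, n) = 1  =>  gcd(n - m, m) = 1, for all m < n < limit
    for n in range(2, limit):
        for m in range(1, n):
            if gcd(m, n) == 1:
                assert gcd(n - m, m) == 1
    return True

print(check(100))  # True
```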
Is it possible to evaluate the integral $I=\int_0^\infty \frac{\sqrt{x}\arctan(x)}{1+x^2}dx$ with residue theorem?
Notice that : \begin{aligned}I=2\int_{0}^{+\infty}{\frac{x\arctan{x}}{2\sqrt{x}\left(1+x^{2}\right)}\,\mathrm{d}x}&=2\int_{0}^{+\infty}{\frac{x^{2}\arctan{\left(x^{2}\right)}}{1+x^{4}}\,\mathrm{d}x}\\ &=\int_{-\infty}^{+\infty}{\frac{x^{2}\arctan{\left(x^{2}\right)}}{1+x^{4}}\,\mathrm{d}x}\\ I&=\int_{0}^{1}{\int_{-\infty}^{+\infty}{\frac{x^{4}}{\left(1+x^{4}\right)\left(1+x^{4}y^{2}\right)}\,\mathrm{d}x}\,\mathrm{d}y}\end{aligned} Now we can apply the residue theorem to $ f_{y}:z\mapsto\frac{z^{4}}{\left(1+z^{4}\right)\left(1+z^{4}y^{2}\right)} $, for $y\in\left(0,1\right] $, on the contour $ \mathscr{C}_{R}=\left[-R,R\right]\cup\Gamma_{R} $, where $ R> \frac{1}{y} $ : We know : \begin{aligned} \int_{\mathscr{C}_{R}}{f_{y}\left(z\right)\mathrm{d}z}&=2\pi\,\mathrm{i}\left(\mathrm{Res}\left(f_{y},\mathrm{e}^{\mathrm{i}\frac{\pi}{4}}\right)+\mathrm{Res}\left(f_{y},-\mathrm{e}^{-\mathrm{i}\frac{\pi}{4}}\right)+\mathrm{Res}\left(f_{y},\frac{\mathrm{e}^{\mathrm{i}\frac{\pi}{4}}}{\sqrt{y}}\right)+\mathrm{Res}\left(f_{y},-\frac{\mathrm{e}^{-\mathrm{i}\frac{\pi}{4}}}{\sqrt{y}}\right)\right)\\ &=2\pi\,\mathrm{i}\left(\frac{\mathrm{e}^{\mathrm{i}\frac{\pi}{4}}}{4\left(1-y^{2}\right)}-\frac{\mathrm{e}^{-\mathrm{i}\frac{\pi}{4}}}{4\left(1-y^{2}\right)}-\frac{\mathrm{e}^{\mathrm{i}\frac{\pi}{4}}}{4\sqrt{y}\left(1-y^{2}\right)}+\frac{\mathrm{e}^{-\mathrm{i}\frac{\pi}{4}}}{4\sqrt{y}\left(1-y^{2}\right)}\right)\\ \int_{-R}^{R}{f_{y}\left(x\right)\mathrm{d}x}+\int_{\Gamma_{R}}{f_{y}\left(z\right)\mathrm{d}z}&=\frac{\pi\sqrt{2}}{2\sqrt{y}\left(1-y^{2}\right)}-\frac{\pi\sqrt{2}}{2\left(1-y^{2}\right)} \end{aligned} We have : $$ \left|\int_{\Gamma_{R}}{f_{y}\left(z\right)\mathrm{d}z}\right|\leq\int_{\Gamma_{R}}{\left|f_{y}\left(z\right)\right|\left|\mathrm{d}z\right|}\leq\frac{R^{2}}{\left(R^{4}-1\right)\left(R^{4}y^{2}-1\right)}\int_{\Gamma_{R}}{\left|\mathrm{d}z\right|}=\frac{R^{4}\pi}{\left(R^{4}-1\right)\left(R^{4}y^{2}-1\right)}\underset{R\to +\infty}{\longrightarrow}0 $$ Thus, taking $ R $ to $ +\infty $, we get : $$ \int_{-\infty}^{+\infty}{f_{y}\left(x\right)\mathrm{d}x}=\frac{\pi\sqrt{2}}{2\sqrt{y}\left(1+\sqrt{y}\right)\left(1+y\right)} $$ That means : \begin{aligned} I=\pi\sqrt{2}\int_{0}^{1}{\frac{\mathrm{d}y}{2\sqrt{y}\left(1+\sqrt{y}\right)\left(1+y\right)}}&=\pi\sqrt{2}\int_{0}^{1}{\frac{\mathrm{d}y}{\left(1+y\right)\left(1+y^{2}\right)}}\\ &=\frac{\pi\sqrt{2}}{2}\int_{0}^{1}{\left(\frac{1}{1+y}+\frac{1-y}{1+y^{2}}\right)\mathrm{d}y}\\ &=\frac{\pi\sqrt{2}}{2}\left[\ln{\left(1+y\right)}+\arctan{y}-\frac{\ln{\left(1+y^{2}\right)}}{2}\right]_{0}^{1}\\ I&=\frac{\pi\sqrt{2}}{8}\left(\pi+\ln{4}\right) \end{aligned}
How to show that a limit of a function exists with a variable
Set $t = 5x$; then $$\lim_{x\to 0}\frac{x}{\ln(5x+1)} = \lim_{t\to 0}\frac{\frac15 t}{\ln(t+1)} = \frac15\lim_{t\to 0}\frac{t}{\ln(t+1)} = \frac15.$$
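The substitution gives the limit value $\frac15$, which a quick numerical check confirms (the error shrinks with $x$):

```python
import math

def f(x):
    # log1p(5*x) = ln(1 + 5x), numerically stable for small x
    return x / math.log1p(5 * x)

# the ratio approaches 1/5 = 0.2 as x -> 0
for x in (1e-2, 1e-4, 1e-6):
    assert abs(f(x) - 0.2) < x
print("ok")
```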
Find the volume of a right circular cone formed by joining the edges of a sector of a circle of radius r cm where the sector angle is 90 degrees.
You can use the circumference of the circle to get the circumference of the base of the cone. This will give you the radius of the base of the cone. Additionally, you have the slant height from the original radius - from these two, you can get the height of the cone.
$\frac{\mathbb{Z}}{m \mathbb{Z}} \otimes \mathbb{Q} \cong 0$ is an application of general property about tensor products?
If $G$ is an abelian group such that each of its elements has finite order, then $G\otimes\mathbb Q=\{0\}$. In fact, if $g\in G$ and if $m\in\mathbb N$ is such that $mg=0$, then, for each $q\in\mathbb Q$,$$g\otimes q=g\otimes\left(m\frac qm\right)=(mg)\otimes\frac qm=0\otimes\frac qm=0.$$
Decreasing sequence in subgroup
Whenever the infimum $a$ of a non-empty subset $S$ of the real numbers that is bounded below satisfies $a\notin S$, one must have $(a,a+\varepsilon)\cap S\neq\emptyset$ for all $\varepsilon>0$, by the definition of infimum. Now choosing $s_n\in(a,a+2^{-n})\cap S$ for $n\in\Bbb N$ ensures that $\lim_{n\to\infty}s_n=a$.
Show that there exist $\{x_n\}$ a sequence of elements in $A- \{x\}$ such that $x_n \to x$
Hint: Since $x$ is an accumulation point of $A$, for each $n\in\Bbb N$ there exists an $x_n\in A-\{x\}$ such that $\|x-x_n\|<\frac{1}{n}$.
Finding a ring isomorphism
Define $$\psi : R/\phi^{-1}(J) \to R'/J$$ $$\psi(r + \phi^{-1}(J) )= \phi(r) + J$$ Then this map is: i) well-defined: in fact, if $r + \phi^{-1}(J) = r' + \phi^{-1}(J) $ then $r-r' \in \phi^{-1}(J)$ and so $\phi(r) - \phi(r') \in J$ ii) a ring homomorphism: because $\phi$ is a ring homomorphism iii) surjective: because $\phi$ is surjective iv) injective: if $\psi(r + \phi^{-1}(J) )= \phi(r) + J= 0$ then $\phi(r) \in J \Rightarrow r \in \phi^{-1}(J) \Rightarrow r + \phi^{-1}(J) = 0$
Complete metric spaces - continuous functions
Sketch: Suppose that $(f_n)$ is a Cauchy sequence in $C(X,Y)$. Informally, being a Cauchy sequence means that 'it wants to converge to a point but that point might be missing from the space'. Using the definition of $d$, show that $f_n(x)$ is a Cauchy sequence for all $x$, then use that $Y$ is complete to get $f(x)$. Finally, verify that $d(f_n,f)\to 0$ if $n\to\infty$. By the previous statement, as $\Bbb R$ is complete, $C([0,1],\Bbb R)$ is complete. Let $(g_n)$ be a Cauchy sequence in the image of the restriction map $R$, so that $g_n=R(f_n)$ for some $f_n:[0,1]\to\Bbb R$. By continuity of $f_n$, we can derive that $R$ is an isometry, i.e. it preserves distance: $d(f,g)=d(R(f),\,R(g))$. It follows that $(f_n)$ is also Cauchy, living in a complete space, so it converges to some $f$, and then of course $g_n\to R(f)$ will hold.
Example of non-associative composition of morphisms
I don't know how much category theory you know, but I guess you can look up any terms you don't recognize. In a category, composition is associative by definition. However, when generalizing categories to higher categories ($n$-categories), it is sometimes useful not to demand associativity, but only "weak associativity". Weak associativity is associativity up to isomorphism in the layer above. That is, for any triple $f,g,h$ of morphisms where $gf$ and $hg$ are defined, there is an "isomorphism of morphisms" $F: (hg)f \rightarrow h(gf)$. For example, if we define a path in a topological space $X$ to be a continuous function $\alpha:[0,1]\rightarrow X$, there is a well-defined operation of "composition" of paths which is weakly associative, but not associative. In this case we get a 2-category where composition is associative up to homotopy.
How often is $a^b$ rational when $a,b>0$ are irrational.
Let $q > 0$, $q \in \mathbb Q$. Let $a$ be irrational such that $a^r \ne q$ for any $r \in \mathbb Q$. (For instance let $a$ be transcendental; or take $q = 2$ and $a = \sqrt 3$.) Then $b = \log_a q$ is irrational, and $a^b = a^{\log_a q} = q$. There are a huge number of these, and they are easy to find. Second part. Let $S = \{(x,\log_x q)\mid x \text{ transcendental};\ x > 0;\ q \in \mathbb Q^+\} \subset P$. Technically $S = \bigcup_{x\in \mathbb T}\{x\} \times \{\log_x q\mid q \in \mathbb Q^+\}$. It's obvious that $\{\text{positive transcendentals}\}$ is dense in $\mathbb R^{\ge 0}$. (Their complement, the algebraics, is countable, for one thing.) Likewise, for any $x > 0$, $x\ne 1$ and any positive $w < z$ we can find a rational $q$ so that $q$ is between $x^w$ and $x^z$, which are not equal. So $w < \log_x q < z$. And so for any positive real $x$, $x \ne 1$ — in particular for any positive transcendental $x$ — the set $\{\log_x q\mid q \in \mathbb Q^+\}$ is dense in $\mathbb R^{\ge 0}$. So $\overline S = \overline{\bigcup_{x\in \mathbb T}\{x\} \times \overline{\{\log_x q\mid q \in \mathbb Q^+\}}}=\overline{\bigcup_{x \in \mathbb T}\{x\} \times \mathbb R^{\ge 0}} = \mathbb R^{\ge 0} \times \mathbb R^{\ge 0}$ So $(\mathbb R^{\ge 0})^2 = \overline{S} \subset \overline{P} \subset (\mathbb R^{\ge 0})^2$
A box has 100 different balls. No of Balls that can be selected = 1 to 100. How many ways can we select?
Each ball is either picked or not, so for each ball we have two possibilities, and so the total number of possible draws is $2^{100}$. From that we need to subtract one way of picking where none of the balls got picked. This leaves us with $2^{100}-1$ ways altogether.
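Equivalently, summing the binomial counts of picking exactly $k$ balls, $k = 1, \dots, 100$, gives the same total:

```python
from math import comb

# number of non-empty selections from 100 distinct balls:
# sum over k of C(100, k) equals 2^100 minus the empty selection
total = sum(comb(100, k) for k in range(1, 101))
assert total == 2**100 - 1
print(total)  # 1267650600228229401496703205375
```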
STEP past question: Showing that $a^2$ is a root of the following equation
Substituting for $p$, $q$, and $r$ in the cubic, we have$$u^3 + 2(b + c - a^2)u^2 + ((b + c - a^2)^2 - 4bc)u - (ab - ac)^2 = 0.$$This is equivalent to $$u^3 + 2(b + c - a^2)u^2 + (a^4 + b^2 + c^2 - 2bc - 2a^2b - 2a^2c)u - a^2(b-c)^2 = 0.$$Now put $u = a^2$ to give$$a^6 + 2ba^4 + 2ca^4 - 2a^6 + b^2a^2 + c^2a^2 + a^6 - 2bca^2 - 2a^4b - 2a^4c - a^2b^2 + 2a^2bc - a^2c^2 = 0,$$which means that $u = a^2$ is a root of the cubic.
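A numeric spot-check of the identity (assuming, as in the substitution above, $p = 2(b+c-a^2)$, $q = (b+c-a^2)^2 - 4bc$, $r = -(ab-ac)^2$): the cubic should vanish at $u = a^2$ for arbitrary values.

```python
import random

def cubic(u, a, b, c):
    p = 2 * (b + c - a * a)
    q = (b + c - a * a) ** 2 - 4 * b * c
    r = -(a * b - a * c) ** 2
    return u**3 + p * u**2 + q * u + r

random.seed(0)
for _ in range(50):
    a, b, c = (random.uniform(-3, 3) for _ in range(3))
    assert abs(cubic(a * a, a, b, c)) < 1e-8  # u = a^2 is always a root
print("ok")
```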
Basic doubt on Stirling numbers of Second Type
Two reasons. First, the usual way avoids any question about the meaning of $\sum_{j=0}^{-1}$. Second, and more important, $(k-j)^n$ is not $0$ when $j=k=n=0$: it’s $1$.
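The point about $0^0 = 1$ is visible in code too: the standard sum $S(n,k) = \frac{1}{k!}\sum_{j=0}^{k}(-1)^j\binom{k}{j}(k-j)^n$ gives $S(0,0) = 1$ precisely because Python also takes `0**0 == 1`.

```python
from math import comb, factorial

def stirling2(n, k):
    # (k-j)**n relies on the convention 0**0 == 1,
    # which is exactly the j = k = n = 0 case discussed above
    return sum((-1)**j * comb(k, j) * (k - j)**n for j in range(k + 1)) // factorial(k)

assert stirling2(0, 0) == 1
assert stirling2(4, 2) == 7
assert stirling2(5, 3) == 25
print("ok")
```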
Find positive integers $a$ such that there are exactly distinct $2014$ positives integers $b$ satisfied $2 \le \dfrac{a}{b} \le 5$.
Your solution isn't quite correct, as I discuss at the end. Instead, as trisct's comment indicates, it's easier to adjust your inequality to bound the $b$ values by multiples of $a$. In particular, with $$2 \le \frac{a}{b} \le 5 \tag{1}\label{eq1B}$$ First, multiply the left and middle parts by $\frac{b}{2}$ to get $$b \le \frac{a}{2} \tag{2}\label{eq2B}$$ Next, multiply the middle and right parts of \eqref{eq1B} by $\frac{b}{5}$ to get $$\frac{a}{5} \le b \tag{3}\label{eq3B}$$ You can combine \eqref{eq2B} and \eqref{eq3B} into one set of inequalities of $$\frac{a}{5} \le b \le \frac{a}{2} \tag{4}\label{eq4B}$$ For there to be exactly $2014$ values of $b$ (with all of them being consecutive) requires that the difference between the right and left sides of \eqref{eq4B} be at least $2013$, but less than $2015$. Thus, you get $$2013 \le \frac{a}{2} - \frac{a}{5} = \frac{3a}{10} \lt 2015 \tag{5}\label{eq5B}$$ Multiplying all parts by $\frac{10}{3}$ gives $$6710 \le a \lt 6716\;\frac{2}{3} \tag{6}\label{eq6B}$$ This gives up to $7$ possible values of $a$. However, it's possible some of them may not give the correct number of values of $b$, so you should check each one. To make it a bit simpler, let $a = 6710 + c$, with $0 \le c \le 6$. Then \eqref{eq4B} becomes $$1342 + \frac{c}{5} \le b \le 3355 + \frac{c}{2} \tag{7}\label{eq7B}$$ For $c = 0$, you get $1342 \le b \le 3355$, so there's $2014$ values of $b$. However, for $c = 1$, you get $1342.2 \le b \le 3355.5$, so there's only $2013$ values of $b$ in this case. Continuing, you'll find that $c = 2,3$ work, while $c = 4,5,6$ don't. Thus, the final set of values of $a$ which work are $\{6710,6712,6713\}$. Note this includes your solution of $6710$, and also includes $2$ others you didn't find. I see two issues with your written solution. First, you tried using $2b_{2014} \le a \le 5b_1$. Note the inequality is for each value of $b$, so you can't just use the largest on the left and the smallest on the right.
Next, you have $2b_{2014} \le 5(b_{2014} - 2013) \iff b_{2014} \le 3355$. However, the correct result from this is $3355 \le b_{2014}$ instead.
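The whole case check above can be confirmed by brute force:

```python
def count_b(a):
    # number of positive integers b with 2 <= a/b <= 5, i.e. 2b <= a <= 5b
    return sum(1 for b in range(1, a + 1) if 2 * b <= a <= 5 * b)

solutions = [a for a in range(6700, 6730) if count_b(a) == 2014]
assert solutions == [6710, 6712, 6713]
print(solutions)  # [6710, 6712, 6713]
```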
invariance of dimension under diffeomorphism of real subspaces
Thanks for the comments. After giving it more thought I was able to solve it on my own. Here's what I did. Suppose $U\subset \mathbb{R}^{n+m},V\subset \mathbb{R}^m$ and $n>0$ (if $n<0$ just consider $f^{-1}$ instead of $f$ in what follows). We have $f'=\left( \begin{array}{cccccc} \frac{\partial f_1}{\partial x_1} & \ldots & \frac{\partial f_1}{\partial x_n} & \frac{\partial f_1}{\partial y_1} & \ldots & \frac{\partial f_1}{\partial y_m} \\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ \frac{\partial f_m}{\partial x_1} & \ldots & \frac{\partial f_m}{\partial x_n} & \frac{\partial f_m}{\partial y_1} & \ldots & \frac{\partial f_m}{\partial y_m} \end{array} \right)=\left( \begin{array}{cc} \frac{\partial \left(f_1,\ldots ,f_m\right)}{\partial \left(x_1,\ldots ,x_n\right)} & \frac{\partial \left(f_1,\ldots ,f_m\right)}{\partial \left(y_1,\ldots ,y_m\right)} \end{array} \right)$ $\left(f^{-1}\right)'=\left( \begin{array}{ccc} \frac{\partial x_1}{\partial f_1} & \ldots & \frac{\partial x_1}{\partial f_m} \\ \ldots & \ldots & \ldots \\ \frac{\partial x_n}{\partial f_1} & \ldots & \frac{\partial x_n}{\partial f_m} \\ \frac{\partial y_1}{\partial f_1} & \ldots & \frac{\partial y_1}{\partial f_m} \\ \ldots & \ldots & \ldots \\ \frac{\partial y_m}{\partial f_1} & \ldots & \frac{\partial y_m}{\partial f_m} \end{array} \right)=\left( \begin{array}{c} \frac{\partial \left(x_1,\ldots ,x_n\right)}{\partial \left(f_1,\ldots ,f_m\right)} \\ \frac{\partial \left(y_1,\ldots ,y_m\right)}{\partial \left(f_1,\ldots ,f_m\right)} \end{array} \right)$ $\left(f^{-1}\right)'\cdot f'=\left( \begin{array}{cc} \frac{\partial \left(x_1,\ldots ,x_n\right)}{\partial \left(f_1,\ldots ,f_m\right)}\cdot \frac{\partial \left(f_1,\ldots ,f_m\right)}{\partial \left(x_1,\ldots ,x_n\right)} & \frac{\partial \left(x_1,\ldots ,x_n\right)}{\partial \left(f_1,\ldots ,f_m\right)}\cdot \frac{\partial \left(f_1,\ldots ,f_m\right)}{\partial \left(y_1,\ldots ,y_m\right)} \\ \frac{\partial \left(y_1,\ldots ,y_m\right)}{\partial \left(f_1,\ldots ,f_m\right)}\cdot \frac{\partial \left(f_1,\ldots ,f_m\right)}{\partial \left(x_1,\ldots ,x_n\right)} & \frac{\partial \left(y_1,\ldots ,y_m\right)}{\partial \left(f_1,\ldots ,f_m\right)}\cdot \frac{\partial \left(f_1,\ldots ,f_m\right)}{\partial \left(y_1,\ldots ,y_m\right)} \end{array} \right)=\left( \begin{array}{cc} I_{(n\times n)} & 0 \\ 0 & I_{(m\times m)} \end{array} \right)=I_{(n+m\times n+m)}$ It follows that $\frac{\partial \left(y_1,\ldots ,y_m\right)}{\partial \left(f_1,\ldots ,f_m\right)}\cdot \frac{\partial \left(f_1,\ldots ,f_m\right)}{\partial \left(y_1,\ldots ,y_m\right)}=I_{(m\times m)}$ Since the Jacobian matrix $\frac{\partial \left(f_1,\ldots ,f_m\right)}{\partial \left(y_1,\ldots ,y_m\right)}$ is not singular, we can apply the implicit function theorem. It follows that there is a function $g:A\to B$, where $A\subset \mathbb{R}^n,B\subset \mathbb{R}^m$ are open subsets and $A\times B\subset U$, such that $f(x,g(x))=\text{const}$ for all $x\in A$. It is then clear that $f$ cannot be bijective.
Solving the integral $\int_0^{\pi/2}\log\left(\frac{2+\sin2x}{2-\sin2x}\right)\mathrm dx$
$$J=\int_0^{\pi/2}\ln\left(\frac{2+\sin2x}{2-\sin2x}\right)\mathrm dx\overset{2x=t}=\frac12 \int_0^\pi \ln\left(\frac{1+\frac12\sin t}{1-\frac12\sin t}\right)\mathrm dt=\int_0^\frac{\pi}{2}\ln\left(\frac{1+\frac12\sin x}{1-\frac12\sin x }\right)\mathrm dx$$ Now let's consider the following integral: $$I(a)=\int_0^\frac{\pi}{2}\ln\left(\frac{1+\sin a\sin x}{1-\sin a\sin x}\right)dx\Rightarrow I'(a)=2\cos a\int_0^\frac{\pi}{2} \frac{\sin x}{1-\sin^2a\sin^2 x}dx$$ $$=\frac{2\cos a}{\sin^2 a}\int_0^\frac{\pi}{2} \frac{\sin x}{\cos^2x +\cot^2 a}dx\overset{u=\cos x}=\frac{2\cos a}{\sin^2 a}\tan a\arctan\left(u\tan a\right)\bigg|_0^1=\frac{2a}{\sin a}$$ $$I(0)=0 \Rightarrow J=I\left(\frac{\pi}{6}\right)=2\int_0^\frac{\pi}{6}\frac{x}{\sin x}dx$$ $$=2\int_0^{\frac{\pi}{6}} x \left(\ln\left(\tan \frac{x}{2}\right)\right)'dx=2x \ln\left(\tan \frac{x}{2}\right)\bigg|_0^{\frac{\pi}{6}} -2{\int_0^{\frac{\pi}{6}} \ln\left(\tan \frac{x}{2}\right)dx}$$ $$\overset{\frac{x}{2}=t}=\frac{\pi}{3}\ln(2-\sqrt 3) -4\int_0^\frac{\pi}{12}\ln (\tan t)dt=\frac{\pi}{3}\ln(2-\sqrt 3) +\frac{8}{3}G$$ Here $G$ is Catalan's constant, and for the last integral see here. Also note that there's a small mistake. After integrating by parts you should have: $$2I=\frac{\pi^2}{4\sqrt 3}- \int_0^\infty\frac{(x^2-1)\arctan x}{x^4+x^2+1}dx=\frac{\pi^2}{4\sqrt 3}-\frac12\underbrace{\int_0^\infty \ln\bigg(\frac{x^2-x+1}{x^2+x+1}\bigg)\frac{dx}{1+x^2}}_{=J}$$
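As an independent sanity check (not part of the derivation), one can compare the closed form $\frac{\pi}{3}\ln(2-\sqrt 3)+\frac83 G$ against a direct numerical quadrature of $J$. A sketch using composite Simpson's rule; the helper names are mine:

```python
import math

def integrand(x):
    # ln((2 + sin 2x) / (2 - sin 2x)), smooth on [0, pi/2]
    return math.log((2 + math.sin(2 * x)) / (2 - math.sin(2 * x)))

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

G = 0.9159655941772190  # Catalan's constant
J_numeric = simpson(integrand, 0.0, math.pi / 2)
J_closed = (math.pi / 3) * math.log(2 - math.sqrt(3)) + 8 * G / 3
```

Both values come out near $1.0636$.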
How do I find the angle of intersecting circles?
Let's call your target angle $\theta$, the radius of the big circle $r_1$, and the radius of the small circle $r_2$. Then $$2 r_1 \sin \frac \theta4 = r_2.$$ How did I get that? Construct radii from the center of the big circle to the center of the small circle and to one of the intersection points; the angle between them is $\frac 12 \theta$. Now bisect that: you get a right triangle with hypotenuse $r_1$ whose opposite leg equals $r_1\sin \frac {\theta}4$. That leg is also $\frac 12 r_2$.
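The relation $2 r_1 \sin\frac\theta4 = r_2$ can be inverted to compute the angle directly; a small sketch (the function name is mine, and it assumes $r_2 \le 2r_1$ so the configuration is possible):

```python
import math

def intersection_angle(r1, r2):
    # Solve 2 * r1 * sin(theta / 4) = r2 for the target angle theta.
    # Requires r2 <= 2 * r1.
    return 4 * math.asin(r2 / (2 * r1))

# Round trip: pick an angle, build r2 from the chord relation, recover theta.
theta = 1.0
r1 = 3.0
r2 = 2 * r1 * math.sin(theta / 4)
recovered = intersection_angle(r1, r2)
```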
Is there difference between existence of integral and integrability
My understanding is that when we say an integral (any type) exists, it means the function is integrable (finite integral) or the integral is infinite. However, Riemann integral is only defined for bounded functions defined on a bounded interval $[a,b]$. (At least that's the case I've seen in all the text books.) So when the integral exists, it must be finite. One way to define Riemann integral is through upper and lower Riemann integrals. The upper Riemann integral is defined as $$U(f)=\inf\left\{\sum_i \left(\sup_{x_{i-1}\le x \le x_i}f(x)\right)\cdot (x_i-x_{i-1}):\text{ all partitions } (x_0,x_1,\cdots,x_n)\right\},$$ and the lower Riemann integral is defined to be $$L(f)=\sup\left\{\sum_i \left(\inf_{x_{i-1}\le x \le x_i}f(x)\right)\cdot (x_i-x_{i-1}):\text{ all partitions } (x_0,x_1,\cdots,x_n)\right\}.$$ The function $f$ is called Riemann integrable if $U(f)=L(f)$, and the common value is called the Riemann integral. Clearly, since we only consider bounded functions $f$: $|f|\le M$, we always have $$-M\le L(f)\le U(f)\le M,$$ and the Riemann integral is always finite if $L(f)=U(f)$. The case of Riemann-Stieltjes integral should be similar. If you take the definitions from Wikipedia, there's no infinite Riemann(-Stieltjes) integral because the definition requires the existence of a real number $A$ which is the limit of Riemann sum.
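To see the definitions of $U(f)$ and $L(f)$ in action, here is a small numerical sketch (my own illustration, not from the question): for an increasing function such as $f(x)=x^2$ on $[0,1]$, the sup and inf on each subinterval are attained at the endpoints, and the upper and lower sums squeeze together toward the finite Riemann integral as the partition is refined:

```python
def upper_lower_sums(f, a, b, n):
    # Upper and lower Riemann sums of an *increasing* f on a uniform
    # partition with n pieces: sup is at the right endpoint, inf at the left.
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    upper = sum(f(xs[i + 1]) * h for i in range(n))
    lower = sum(f(xs[i]) * h for i in range(n))
    return upper, lower

f = lambda x: x * x
u_coarse, l_coarse = upper_lower_sums(f, 0.0, 1.0, 100)
u_fine, l_fine = upper_lower_sums(f, 0.0, 1.0, 10000)
# Both sums approach the integral 1/3, and U - L shrinks like 1/n.
```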
Reference for real analytic manifolds
Let $\overline R$ be the closure of the set of points where $dF_x$ has full rank. If this is not everything, pick a point $x \in \partial \overline R$. Pick a real analytic chart around this point $x$; because the partial derivatives of real analytic functions are real analytic, $dF$ is real analytic. Supposing for notational convenience that $\dim M =n$ (though there is no reason to make this assumption), $\det(dF)$ is also an analytic function. Because it is nonzero somewhere, it must be nonzero on a dense open set (in this chart) - otherwise there would be a point around which $\det(dF)$ vanishes on a small open set, and hence $\det(dF)$ vanishes everywhere by the identity theorem, contradicting the choice $x \in \partial \overline R$. Note that there was no point here where we needed the codomain to be $\Bbb R^n$ as opposed to some other analytic manifold.
How to find indefinite integral $\int a^{\frac {1}{x}} \mathrm dx$?
Let $t = \log(a)$. Make the $u$-substitution $u=\frac{1}{x}$: $$ \int \exp\left(\frac{t}{x}\right) \mathrm{d}x = \int \exp\left( t u \right) \mathrm{d} \left(\frac{1}{u}\right) \stackrel{\text{by parts}}{=} \frac{\exp(t u)}{u} - t \int \frac{\exp(t u)}{u} \mathrm{d} u \stackrel{u \to u/t}{=} \frac{\exp(t u)}{u} - t \int \frac{\exp(u)}{u} \mathrm{d} u $$ The latter integral is non-elementary. A special function, the exponential integral $\operatorname{Ei}(u)$, provides the required antiderivative. The answers to the earlier question are a recommended read.
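Unwinding the substitutions, an antiderivative is $F(x)=x\,a^{1/x} - t\operatorname{Ei}(t/x)$ with $t=\ln a$. One can check this numerically; the sketch below implements $\operatorname{Ei}$ via its standard power series $\operatorname{Ei}(z)=\gamma+\ln z+\sum_{n\ge1} z^n/(n\cdot n!)$ (adequate for moderate $z>0$) and verifies $F'(x)\approx a^{1/x}$ by a central difference (the helper names are mine):

```python
import math

EULER_GAMMA = 0.5772156649015329

def Ei(z, terms=80):
    # Exponential integral for z > 0 via the power series
    # Ei(z) = gamma + ln z + sum_{n>=1} z^n / (n * n!).
    s = EULER_GAMMA + math.log(z)
    fact = 1.0
    for n in range(1, terms + 1):
        fact *= n
        s += z ** n / (n * fact)
    return s

def antiderivative(x, a):
    # Candidate antiderivative of a**(1/x).
    t = math.log(a)
    return x * a ** (1 / x) - t * Ei(t / x)

# Finite-difference check that F'(x) matches a**(1/x).
a, x, h = 3.0, 2.0, 1e-5
deriv = (antiderivative(x + h, a) - antiderivative(x - h, a)) / (2 * h)
```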
Does $\text{Tr}A \leq \text{Tr}B \implies \text{Tr}PA \leq \text{Tr}QB$?
No, we cannot conclude that (not even for $m=n$ and $P=Q$). Pick $A=\operatorname{diag}(1,0)$, $B=\operatorname{diag}(0,1)$ and $P$ the projection onto the first coordinate; then $PA=A$ and $PB=0$, so $\operatorname{tr}(PA)=1>0=\operatorname{tr}(PB)$.
Genus of the desingularization of a plane curve
The answer to all questions is yes. First, let $f\in p$ be a non-zero element. Then $A_f=\mathrm{Frac}(A)$ because its only prime ideal is $0$ ($A$ has two prime ideals; the maximal one contains $f$, hence disappears in $A_f$). So $B_f=A_f$, and for any $b\in B$, $f^nb\in A$ for some power of $f$. As $B$ is finitely generated over $A$ (as a module), we can choose the same $n$ for all elements of $B$. For the second question, note that the normalization of $\hat{A}$ is $\hat{B}$.
Why is $\mathbb{C}_p$ isomorphic to $\mathbb{C}$?
Step 1: Any complete metric space without isolated points has cardinality at least $\mathfrak{c}$ (continuum cardinality). I got a nice explanation of this here. In particular $\mathbb{C}_p$ has cardinality at least $\mathfrak{c}$. Step 2: As wikipedia knows, a topological space which is Hausdorff, first countable and separable (so in particular a separable metric space) has cardinality at most $\mathfrak{c}$: indeed, every point is the limit of a sequence from a countable set, and $\aleph_0^{\aleph_0} = \mathfrak{c}$. Now $\overline{\mathbb{Q}}$ is dense in $\overline{\mathbb{Q}_p}$ (a consequence of Krasner's Lemma: see e.g. $\S 3.5$ of these notes) and hence also in its completion $\mathbb{C}_p$. So $\mathbb{C}_p$ has cardinality at most $\mathfrak{c}$. [Added: Here is an alternate -- less elegant but more elementary -- argument for Step 2: (i) For any infinite field $K$, the cardinality of $K$ is equal to the cardinality of its algebraic closure. (ii) For any metric space $X$, the cardinality of the completion of $X$ is at most $\# X^{\aleph_0}$ (the number of sequences with values in $X$). By standard facts on cardinal exponentiation, we have $\# X \leq \mathfrak{c} \iff \# X^{\aleph_0} \leq \mathfrak{c}$.] Thus by the Schröder–Bernstein Theorem, $\mathbb{C}_p$ has cardinality $\mathfrak{c}$.
What does the conclusion with confidence interval mean, non-statistically speaking?
Under the frequentist interpretation, the confidence interval is some interval $(a,b)$ where $a,b$ are random numbers, which are determined by the sample. For a $95\%$ confidence interval for estimating some quantity $\mu$, $95\%$ of the time you will have $a<\mu<b$. Note that in a fixed sample, either $a<\mu<b$ is true or it isn't. We never really know which. But under the frequentist interpretation we can say that if we collected many confidence intervals, $95\%$ of them would contain $\mu$ and $5\%$ would not contain $\mu$. Note that under this interpretation it is not valid to say that $\mu$ is in our single interval $(a,b)$ with probability $0.95$, which is what you would naively expect. Under the Bayesian interpretation, something like this is indeed true, but there are subtleties involved in the Bayesian interpretation which are more difficult to explain.
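The frequentist reading ("$95\%$ of such intervals contain $\mu$") is easy to simulate. A sketch with a normal population of known $\sigma$, where each interval is $\bar x \pm 1.96\,\sigma/\sqrt{n}$ (the setup and parameter values are mine, purely for illustration):

```python
import math
import random

random.seed(0)

def coverage(mu=5.0, sigma=2.0, n=30, trials=20000):
    # Fraction of simulated 95% confidence intervals that contain mu.
    half = 1.96 * sigma / math.sqrt(n)
    hits = 0
    for _ in range(trials):
        xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
        hits += (xbar - half < mu < xbar + half)
    return hits / trials

cov = coverage()  # close to 0.95, though any single interval either hits or misses
```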
Basis vectors derivation from the position vector in general curvilinear coordinates
You should be considering $\vec R=\sum_i R^i\vec e_i$ (instead of $R=\sum_i x^i\vec e_i$), where the $R^i$ are functions of the euclidean coordinates $x^j$. It is necessary that the Jacobian matrix $J\vec R=\left[\frac{\partial R^i}{\partial x^j}\right]$ have determinant different from zero on the domain of definition of $\vec R$. A new basis for the space in these coordinates is $$\vec \partial_k=\sum_s\frac{\partial R^s}{\partial x^k}\vec e_s,$$ which may give you a non-orthogonal (and non-orthonormal) frame of the space.
Induction on the Fibonacci sequence?
Since the $F_n$ are (uniquely) defined by $$F_0=0,\qquad F_1=1,\qquad F_n=F_{n-1}+F_{n-2}\text{ if }n\ge2,$$ you have to show that $f(n):=\frac{\phi^n-\hat\phi^n}{\sqrt 5}$ also fulfills $$f(0)=0,\qquad f(1)=1,\qquad f(n)=f(n-1)+f(n-2)\text{ if }n\ge2.$$ Thus you verify $F_0=f(0)$ and $F_1=f(1)$ directly and for $n\ge 2$ you conclude (from the assumption that $F_k=f(k)$ for $0\le k&lt;n$) that also $f(n)=f(n-1)+f(n-2)=F_{n-1}+F_{n-2}=F_n$.
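The two characterizations can be checked against each other numerically (floating point, so only for moderate $n$; the code is just an illustration of the uniqueness argument, not part of the proof):

```python
import math

phi = (1 + math.sqrt(5)) / 2
phihat = (1 - math.sqrt(5)) / 2

def f(n):
    # Closed-form (Binet) candidate for the n-th Fibonacci number.
    return (phi ** n - phihat ** n) / math.sqrt(5)

# Recurrence definition of the Fibonacci numbers.
fib = [0, 1]
for n in range(2, 40):
    fib.append(fib[-1] + fib[-2])
```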
What are the odds of winning this bingo game?
The probability of winning depends on the way you choose the rows. To have a vague idea of the numbers you can look at what happens if each row is filled at random (this is clearly not optimal, for instance it includes the possibility that you use two identical rows). For a single row, the probability of winning is the ratio of draws of 20 numbers out of 75 that include the 8 selected numbers in the row to the total number of draws of 20 numbers out of 75, i.e. $$ p=\frac{75-8 \choose 20-8}{75 \choose 20}\approx \frac{1}{133\,929}. $$ If you play $n$ rows, each independently and randomly selected, then the probability of winning is $$ 1-(1-p)^n. $$ If you want that to be $70~\%$ you solve for $n$: $$ 1-(1-p)^n=.7 \rightarrow n\approx 161\,247. $$ This is clearly an upper bound, as a better strategy should do better than selecting rows at random. Edit: details on the computation of $p$. Once you have chosen your 8 numbers, the number of draws of 20 numbers out of 75 that will make you a winner is the number of ways to pick your 8 numbers exactly (just one way) and then to pick the remaining 12 numbers out of the remaining 67 numbers: $1\times{67\choose12}$. The total number of ways to pick 20 numbers remains ${75\choose20}$. The probability is the ratio of favourable draws to total draws, thus the above formula.
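These numbers are easy to reproduce with exact binomial coefficients (note that $p$ is only approximately $1/133\,929$, and rounding conventions shift the final $n$ by one or so):

```python
import math

# Probability that one fixed row of 8 numbers is contained in a draw
# of 20 numbers out of 75.
p = math.comb(67, 12) / math.comb(75, 20)

# Number of independent random rows needed for a 70% chance of a win:
# solve 1 - (1 - p)**n = 0.7 for n.
n = math.log(1 - 0.7) / math.log(1 - p)
```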
Norm Product expression
It is very simple: $$\|ABv\|_{W}\le \|A\|_{V\rightarrow W}\|Bv\|_{V}\le \|A\|_{V\rightarrow W}\|B\|_{U\rightarrow V}\|v\|_{U}$$ for all $v\in U$; then immediately we get $$\|AB\|_{U\rightarrow W}\le \|A\|_{V\rightarrow W}\|B\|_{U\rightarrow V}.$$
Using valid argument forms to derive a conclusion from given premises?
\begin{align} \lnot p \vee q \rightarrow r \tag{1}\\ s \vee \lnot q \tag{2} \\ \lnot t \tag{3}\\ p \rightarrow t \tag{4}\\ \lnot p \wedge r \rightarrow \lnot s \tag{5}\end{align} As you note in your post, from the fourth premise we have $p\to t$, and we have, in the third premise, $\lnot t$. By modus tollens, we derive $\lnot p$. Now, since we've derived $\lnot p$ from premises (3), (4), we also have $\lnot p \lor q$, by "addition" to $\lnot p$, (also called "or-introduction" which is shorthand for disjunction introduction). And from $\lnot p \lor q$, together with the premise (1), we have $r$ by modus ponens. Now, since we already deduced $\lnot p$, and we just deduced $r$, we can use "And-introduction (conjunction-introduction)" to get $\lnot p \land r$. Given $\lnot p \land r$, and premise (5): $(\lnot p \land r) \to \lnot s$, we have, by modus ponens, $\lnot s$. But given our premise (2), $s \lor \lnot q,$ together with $\lnot s$, we deduce $\lnot q$, as desired.
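Since only five propositional variables occur, the whole derivation can also be double-checked by brute force over all $2^5$ truth assignments: every model of the premises must satisfy $\lnot q$ (and the intermediate conclusions $\lnot p$, $r$, $\lnot s$). A sketch, with my own helper:

```python
from itertools import product

def implies(a, b):
    # Material conditional a -> b.
    return (not a) or b

valid = True
for p, q, r, s, t in product([False, True], repeat=5):
    premises = (
        implies((not p) or q, r),       # (1)
        s or (not q),                   # (2)
        not t,                          # (3)
        implies(p, t),                  # (4)
        implies((not p) and r, not s),  # (5)
    )
    if all(premises):
        # In every model of the premises, the derived statements hold.
        valid = valid and (not q) and (not p) and r and (not s)
```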
Curvature Blindness Illusion: Any mathematical explanation?
My percept matches what you would get if you replaced each segment with its best-fitting circular arc. For the zigzag case the best-fitting circular arc has zero curvature, i.e. it is a straight line segment. If this hypothesis is correct, in the other case the curve should appear more "rounded" than a usual sine wave, and to be honest it kind of does.
Field extensions of finite degree and primitive elements
Let $K\supseteq F$ be fields of characteristic $0$ such that any element of $K$ has degree $\leq n$ over $F$. If $[K:F]=1$ we are done, and if not then there is some $r_1\in K\setminus F$. Let $E_1=F(r_1)$. Observe that $[E_1:F]\leq n$ by hypothesis. If $[K:E_1]=1$ then $[K:F]=[E_1:F]$ and we are done, and if not then there is some $r_2\in K\setminus E_1$, and let $E_2=E_1(r_2)=F(r_1,r_2)$. Note that because $E_2$ is a finite separable extension of $F$, it is simple, say with $E_2=F(s)$. Thus, again by hypothesis, we have $[E_2:F]\leq n$. It is clear that we can repeat this argument any finite number of times. If we have not found an $m$ for which $K=E_m$ by the time we reach $E_d$ where $d=\lceil\log_2(n)\rceil+1$, then $$[E_d:F]=\underbrace{[E_d:E_{d-1}]}_{\geq 2}\cdots\underbrace{[E_1:F]}_{\geq 2}>n$$ but this contradicts $[E_d: F]\leq n$. Thus $K=E_m$ for some $m$, so that $[K:F]\leq n$. (I've edited out my previous, unnecessarily-complicated argument using Zorn's lemma. Thanks to Mariano for catching that there was a simpler way.) As Curtis points out, it is possible for every element of $K$ to have finite degree over $F$, while the extension $K/F$ has infinite degree. For example, take $F=\mathbb{Q}$ and $K=\mathbb{Q}(\sqrt{2},\sqrt{3},\sqrt{5},\ldots)$, or $K=\overline{\mathbb{Q}}$. Note that finite is different from bounded.
Find sequence of functions $f_n \in C_0^\infty (-1,1)$ converging to $0$ in $L^2$ but $f_n(0) \rightarrow \infty$
The following does not depend on $n$: $$ \int_{-\infty}^{\infty}e^{-x^2}dx=\int_{-\infty}^{\infty}ne^{-(ny)^2}dy $$ So $g_n(x)=\sqrt{n}e^{-(nx)^2/2}$ defines a sequence $\{ g_n \}_{n=1}^{\infty}$ of vectors in $L^2(\mathbb{R})$, all with the same norm (namely $\pi^{1/4}$). Therefore, $f_n(x)=n^{1/4}e^{-(nx)^2/2}=g_n(x)/n^{1/4}$ tends to $0$ in $L^2(\mathbb{R})$ as $n\rightarrow\infty$, even though $f_n(0)=n^{1/4}\rightarrow\infty$. It's not hard to modify the argument, starting with a function $g \in C_c^\infty(-1,1)$ instead of $e^{-x^2}$, so that the $f_n$ lie in $C_0^\infty(-1,1)$.
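A quick numerical sanity check of the two competing behaviours (the $L^2$ norm shrinking while the value at $0$ blows up), using a fine trapezoidal grid; the helper name and parameters are mine:

```python
import math

def l2_norm(n, half_width=1.0, steps=100000):
    # L^2 norm of f_n(x) = n^{1/4} exp(-(n x)^2 / 2) over [-half_width, half_width];
    # the mass outside this window is negligible for n >= 10.
    h = 2 * half_width / steps
    total = 0.0
    for i in range(steps + 1):
        x = -half_width + i * h
        val = math.sqrt(n) * math.exp(-((n * x) ** 2))  # |f_n(x)|^2
        w = 0.5 if i in (0, steps) else 1.0             # trapezoid weights
        total += w * val
    return math.sqrt(total * h)

norm10 = l2_norm(10)      # exact value is pi^(1/4) / n^(1/4)
norm1000 = l2_norm(1000)  # much smaller, while f_n(0) = n^(1/4) grows
```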
$\left(\mathbb{Z}/p\mathbb{Z},\times\right)\cong\left(\mathbb{Z}/\left(p-1\right)\mathbb{Z},+\right)$?
It was mentioned in the comments above that $\mathbb{Z} / p\mathbb{Z}$ is not a group under multiplication because $0$ does not have an inverse. Now, every other element does have an inverse. So let $G = \mathbb{Z} / p\mathbb{Z} - \{0\}$. You can show that this is a group under multiplication. In fact, you can show that this is cyclic. If you know this and you know that $\mathbb{Z} / (p-1)\mathbb{Z}$ is cyclic of order $p-1$, then you know they are isomorphic because there is (up to isomorphism) only one cyclic group of order $n$ for each natural number $n$. The only thing about the above that requires a bit of work is showing that $G$ is cyclic.
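A quick computational illustration of the cyclicity (not a proof): for $p=17$, the powers of $3$ run through all $16$ nonzero residues, so $G$ is cyclic with generator $3$, and the number of generators is $\varphi(16)=8$. Sketch with my own helper:

```python
def is_generator(g, p):
    # g generates the multiplicative group mod p iff its powers
    # hit all p - 1 nonzero residues.
    return len({pow(g, k, p) for k in range(1, p)}) == p - 1

p = 17
generators = [g for g in range(1, p) if is_generator(g, p)]
```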
Prove that $a^2+b^2+c^2\geqslant\frac{1}{3}$ given that $a\gt0, b\gt0, c\gt0$ and $a+b+c=1$, using existing AM GM inequality
HINT: You can use your idea of squaring $a+b+c$, but also note that $\color{blue}{ab+bc+ca \le a^2 + b^2 + c^2}$, which you can prove with the help of AM-GM. (Hint for proving this: the AM-GM inequality tells us what about $a^2 + b^2, b^2+c^2$ and $c^2+a^2$?) One more hint (based on a suggestion from user qsmy): let $x = a^2+b^2+c^2$ and $y = ab+bc+ca$. Squaring both sides of $a+b+c=1$ gives $x+2y=1$, and the blue inequality is $x\geq y$. Can you see it now?
Check Christoffel symbol defines Levi-Civita connection
Too lengthy for a comment... I'm not sure I understand the question in your comment. A vector field, in coordinates, is given by $V= V^i \frac{\partial}{\partial x^i}$, and if you change coordinates you get, by the chain rule, $V^i \frac{\partial y^j}{\partial x^i}\frac{\partial}{\partial y^j}$ (summation convention is used). From this calculation people tend to say that this is how a vector field transforms, i.e. $$ \tilde{ V}^k = V^i \frac{\partial y^k}{\partial x^i} $$ Now the point is that the partial derivatives of the $V^i$ do not transform that way (they depend on the coordinate system or chart). The components of the covariant derivative do. For this you need to know how the Christoffel symbols transform and then compute to see that the components of $D_V X$ transform correctly... this is, again, nothing else but plugging in the definitions and applying the chain and product rules. It looks terrifying and complicated but it is, in fact, just a straightforward though lengthy calculation. If you want to see it written down explicitly, Spivak's 'Comprehensive Introduction to Differential Geometry' (2nd volume, and Chapter 9 of volume 1, if I recall correctly) is a good source for this.
Distribution Technique Question of two independent Exponential Distributions
For the sake of my comfort, I will use $X$ and $Y$ instead of $X_1$ and $X_2$. We want the cumulative distribution function of $Z=\frac{X}{X+Y}$. Let $z$ be between $0$ and $1$. We want $\Pr(Z\le z)$, which is the probability that $X\le z(X+Y)$, or equivalently the probability that $Y\ge \frac{1-z}{z}X$. So we want to calculate the probability of landing in the part of the first quadrant that is above the line $y=\frac{1-z}{z}x$. This is the key geometric fact. Integrate first with respect to $y$, where $y$ goes from $\frac{1-z}{z}x$ to $\infty$. There is some cancellation, and we get $e^{-x/z}$. Now integrate with respect to $x$, from $0$ to $\infty$. An antiderivative is $-ze^{-x/z}$, so the integral is $z$.
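The computation shows $\Pr(Z\le z)=z$, i.e. $Z=X/(X+Y)$ is uniform on $(0,1)$ when $X,Y$ are i.i.d. exponentials. A quick simulation bears this out (rate-$1$ exponentials assumed; equal rates are what matter, as the rate cancels in the ratio):

```python
import random

random.seed(1)

n = 200000
samples = []
for _ in range(n):
    x = random.expovariate(1.0)  # rate-1 exponential
    y = random.expovariate(1.0)
    samples.append(x / (x + y))

mean = sum(samples) / n                          # uniform mean is 1/2
frac_below = sum(s <= 0.3 for s in samples) / n  # Pr(Z <= 0.3) should be 0.3
```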
Show that $ x\cdot\cos(x)+\sin(x)/2=\sum_{n=2}^\infty (-1)^n\cdot\frac{2n}{n^2-1}\cdot\sin(nx)$ when $x\in [-\pi,\pi]$
Hint: $$\dfrac{2n\sin nx}{(n+1)(n-1)}=\dfrac{\sin nx}{n-1}-\dfrac{\sin nx}{n+1}$$ Now $(-1)^n\dfrac{\sin nx}{n-1}$ is the imaginary part of $$\dfrac{(-1)^ne^{inx}}{n-1}=e^{ix}\cdot-\dfrac{(-e^{ix})^{n-1}}{n-1}$$ Now $$\sum_{n=2}^\infty-\dfrac{(-e^{ix})^{n-1}}{n-1}=\ln(1+e^{ix})=\ln (e^{ix/2})+\ln\left(2\cos\dfrac x2\right)=\left(2n\pi+\dfrac x2\right)i+\ln\left(2\cos\dfrac x2\right)$$ Put $n=0$ to find the principal value. Similarly for $$\dfrac{\sin nx}{n+1}$$ Finally use How to prove Euler's formula: $e^{i\varphi}=\cos(\varphi) +i\sin(\varphi)$?
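As a numerical check of the stated expansion at an interior point (convergence is only conditional, so partial sums approach slowly, roughly like $1/N$):

```python
import math

def partial_sum(x, N):
    # Partial sum of sum_{n=2}^{N} (-1)^n * 2n/(n^2 - 1) * sin(n x).
    return sum(
        (-1) ** n * (2 * n / (n * n - 1)) * math.sin(n * x)
        for n in range(2, N + 1)
    )

x = 1.0
target = x * math.cos(x) + math.sin(x) / 2
approx = partial_sum(x, 20000)
```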
Computing the Singular Homology of the pair $(D^n,S^{n-1})$
If $X$ is a space, $H_0(X)$ is a free Abelian group consisting of a direct sum of one copy of $\Bbb Z$ for each path component $C$ of $X$. This $\Bbb Z$ has a canonical generator which I write as $[C]$. In the map $H_0(X)\to H_0(Y)$ induced by $f:X\to Y$, $[C]$ maps to $[D]$ where $D$ is the path component of $Y$ containing $f(C)$. Here $S^0$ has two path components and $D^1$ has one. Both the canonical generators of $H_0(S^0)$ map to the canonical generator of $H_0(D^1)$. Your map $i_0^*$ is therefore surjective.
root testing a series
It can be shown that $$\liminf \frac{a_{n+1}}{a_n}\le \liminf \sqrt[n]{a_n}\le \limsup \sqrt[n]{a_n}\le \limsup\frac{a_{n+1}}{a_n}$$ therefore convergence by the ratio test implies convergence by the root test, but not vice versa. Refer also to the related: Theorem 3.37 in Baby Rudin: $\lim\inf\frac{c_{n+1}}{c_n}\leq\lim\inf\sqrt[n]{c_n}\leq\lim\sup\sqrt[n]{c_n}\leq \lim\sup\frac{c_{n+1}}{c_n}$ For the series we can use that $$\sum_{n=3}^\infty \left(\frac{1}{2}\right)^{\lceil\log_2 n\rceil-1} =\sum_{n=1}^\infty \,\sum_{k=2^{n}+1}^{2^{n+1}} \left(\frac{1}{2}\right)^{n} =\sum_{n=1}^\infty 2^{n}\left(\frac{1}{2}\right)^{n}$$ (there are $2^{n}$ integers $k$ with $2^{n}<k\le 2^{n+1}$).
Examples of faithfully flat modules
Formal properties
The tensor product of two faithfully flat modules is faithfully flat. If $M$ is a faithfully flat module over the faithfully flat $A$-algebra $B$, then $M$ is faithfully flat over $A$ too. An arbitrary direct sum of flat modules is faithfully flat as soon as at least one summand is. (But the converse is false: see caveat below.)
Algebras
An $A$-algebra $B$ is faithfully flat if and only if it is flat and every prime ideal of $A$ is contracted from $B$, i.e. $\operatorname{Spec}(B) \to\operatorname{Spec}(A)$ is surjective. If $A\to B$ is a local morphism between local rings, then $B$ is flat over $A$ iff it is faithfully flat over $A$.
Caveat fidelis flatificator
a) Projective modules are flat, but needn't be faithfully flat. For example $A=\mathbb Z/6=(2)\oplus (3)$ shows that the ideal $(2)\subset A$ is projective, but is not faithfully flat because $(2)\otimes_A \mathbb Z/2=0$.
b) A ring of fractions $S^{-1}A$ is always flat over $A$ and never faithfully flat [unless you only invert invertible elements, in which case $S^{-1}A=A$].
c) The $\mathbb Z$-module $\oplus_{{{\frak p}}\in \operatorname{Spec}(\mathbb Z)} \mathbb Z_{{\frak p}}$ is faithfully flat over $\mathbb Z$. All summands are flat, however none is faithfully flat.
Is the standard definition of tensor product an effective way to introduce it?
For two modules $A,B$ over a ring $R$ you can define the tensor product to be all linear combinations of elements of the form $a\otimes b$ for $a\in A$ and $b\in B$ satisfying the conditions: i) $ra\otimes b=r(a\otimes b)=a\otimes rb$ for all $r\in R$, $a\in A$, and $b\in B$ ii) $(a+a')\otimes b=(a\otimes b)+(a'\otimes b)$ for all $a,a'\in A$ and $b\in B$ (and similar for all $b,b'\in B$) For more info, see this link: https://en.wikipedia.org/wiki/Tensor_product#The_definition_of_the_abstract_tensor_product
Prove $\left(1-\epsilon\right)^x\leqslant\left(1-\epsilon x\right)$ and $\left(1+\epsilon\right)^x\leqslant\left(1+\epsilon x\right)$
See the 'generalization' section here: http://en.wikipedia.org/wiki/Bernoulli%27s_inequality
Why is the kernel of the connection one form a connection on a principal bundle?
I've just started studying these things, so there are some things which aren't quite clear to me too, but I'll try to give you some ideas: $(i)$ $H_p\cap V_p=\{0\}$. Recall we have an isomorphism $V_p\longrightarrow \mathfrak{g}$. If $v\in V_p$ is non-zero then $\omega_p(v)\neq 0$, for the image of $v$ by $\omega_p$ coincides with the image of $v$ by the isomorphism $V_p\longrightarrow \mathfrak{g}$. This shows $$\left(V_p\setminus\{0\}\right)\cap H_p=\left(V_p\setminus\{0\}\right)\cap \textrm{Ker}(\omega_p)=\emptyset\Rightarrow V_p\cap H_p=\{0\}.$$ $(ii)$ Hint for showing $T_pP=H_p+V_p$: Take $X\in T_p P$ and associate to it a vertical vector field $\tilde{X}$ on $P$. Then $\tilde{X}_p\in V_p$. Since $$X=(X-\tilde{X}_p)+\underbrace{\tilde{X}_p}_{\in V_p},$$ it suffices to show $X-\tilde{X}_p\in \textrm{Ker}(\omega_p)$. I'll try to improve the proof later.
$f$ is strictly increasing and $g$ is decreasing. How to find whether $f \circ g$ and $g\circ f$ are increasing or decreasing?
It is not necessary to produce a contradiction. When $x<y$ then $g(x)\geq g(y)$ and therefore $f\bigl(g(x)\bigr)\geq f\bigl(g(y)\bigr)$. This already proves that $f\circ g$ is decreasing. You cannot hope for more: it could be that $g(x)=g(y)$, so that we obtain an instance of $x<y$ and $f\circ g(x)=f\circ g(y)$. Similarly for $g\circ f$.
Finding out the number of special elements in $X$
If $(a_1+\sqrt{5}ib_1)(a_2+\sqrt{5}ib_2)=1$ then $|a_1+\sqrt{5}ib_1|^2|a_2+\sqrt{5}ib_2|^2=1$, that is $(a_1^2+5b_1^2)(a_2^2+5b_2^2)=1$. Now each of $a_j^2+5b_j^2$ is a nonnegative integer.
Is this statement meaningful if one of the elements is undefined?
Hint: $$\max\left\lbrace a,b\right\rbrace=\dfrac{a+b+|a-b|}{2}$$ If you don't know $b$ then you can't determine $|a-b|$ and the rest of the formula ;)
The closure of $(0,1)$ in the lower-limit topology on $\mathbb{R}$
Well, it's relatively easy to see that $[0,1)$ is closed. This is because $[1,\infty)$ and $(-\infty, 0)$ are open sets, and the complement of their union, $[0,1)$, is therefore closed. That said, there are two things you may be tempted to conclude from this which are simply NOT true: Just because every set of the sub-basis is a closed set, that does not mean that all open sets of the topology are both open and closed. This is because a union of open sets is always open, but a union of closed sets may not be closed. Just because $[0,1)$ is a closed set, that does not mean that it is the closure of $(0,1)$. The closure is the smallest closed set that contains $(0,1)$. This means that the closure of $(0,1)$ will be a subset of $[0,1)$, but a superset of $(0,1)$, meaning that the closure of $(0,1)$ may be either $(0,1)$ (if the set is closed) or $[0,1)$ if it is not.
Number of integers covered
For $m=2$ this is one more (for the $0$) than the number of distinct products $s \cdot t$ with $s,t \in \{1,2,\ldots,n\}.$ The latter number is for $n=1,2,\ldots$ the sequence $1,3,6,9,14,18$, which certainly no simple formula is known to produce. It is discussed as an o.e.i.s. sequence A027424. For larger $m$ it would seem unlikely for there to be a closed formula, since even for $m=2$ there doesn't seem to be a known one. However note one of the links from the o.e.i.s. is to a paper discussing the more general case of an arbitrary number of factors. (Unfortunately that link only lets one see an outline, and refers to yet another site where the final paper might be found.) Added note about the second part of the question, about the $a_{ij}$ choice: It may not be optimal, but choosing $a_{ij}=(-1)^{j+1}$ certainly brings the count down. Suppose $n=2k$ is even. Then the sums for a fixed row $j$ of the terms $a_{ij}x_{ij}=(-1)^{j+1}x_{ij}$ have as possible values the integers in the interval $[-k,k].$ If we omit the zero choice here (later one can add $1$ to the sum) then the possible values of the product are each, up to a sign of $\pm 1$, the nonzero values of the original unaltered sum for parameters $(m,n)=(m,k).$ To state this more clearly, let $1+S(m,n)$ be the number covered as in the post, and $1+T(m,n)$ the number covered with our choice of coefficients $a_{ij}.$ Then what we have is $T(m,2k)=2 \cdot S(m,k).$ Even for $m=2$ this gives a significant decrease for the altered sum, compared to the unaltered one.
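The first few terms of the distinct-products count (the A027424-related sequence quoted above) are easy to reproduce by brute force:

```python
def distinct_products(n):
    # Number of distinct values of s * t with s, t in {1, ..., n}.
    return len({s * t for s in range(1, n + 1) for t in range(1, n + 1)})

values = [distinct_products(n) for n in range(1, 7)]
# matches the quoted sequence 1, 3, 6, 9, 14, 18
```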
Boundary is a union of orbits with strictly lower dimension
Let $x\in X$ with $Y=G\cdot x$. There is a subset $U\subseteq Y$, which is open and dense in $\bar{Y}$, so you can write the orbit as a union of open sets $$G\cdot x=\bigcup_{g\in G} g\cdot U, $$ where $g\cdot U$ is open as the isomorphic image of an open set. $\overline{G\cdot x}\setminus G\cdot x$ is a closed subset and every irreducible component of it is a proper subset of an irreducible component of $\overline{G\cdot x}$, because $G\cdot x$ is dense in $\overline{G\cdot x}$. So by the definition of Krull dimension it has strictly lower dimension.
Lebesgue density of the random vector $(X,X+Y)$ where $X$ and $Y$ are exponentially distributed to different parameters
You have to assume independence between $X$ and $Y$; then you can use the fundamental transformation theorem (Jacobian). For completeness, you should also tell us whether $\alpha$ and $\beta$ are &quot;scale&quot; or &quot;rate&quot; parameters, because this changes the density's expression. Let's set $$\begin{cases} v=x+y\\ u=x \end{cases}\rightarrow \begin{cases} x=u\\ y=v-u \end{cases}$$ The Jacobian is evidently $|J|=1$, thus $$f_{UV}(u,v)=\alpha e^{-\alpha u}\beta e^{-\beta(v-u)}=\alpha \beta e^{-(\alpha-\beta)u}e^{-\beta v}$$ The only difficulty now is to understand what the $(U,V)$ support is. It is quite easy to see that it must be $0<u<v<\infty$, thus the requested joint density is $$f_{UV}(u,v)=\alpha \beta e^{-(\alpha-\beta)u}e^{-\beta v}\cdot\mathbb{1}_{[0;\infty)}(u)\cdot\mathbb{1}_{[u;\infty)}(v)$$ Useful observation Now let's set $\alpha=\beta$, so we have $$f_{UV}(u,v)=\alpha^2 e^{-\alpha v}\cdot\mathbb{1}_{[0;\infty)}(u)\cdot\mathbb{1}_{[u;\infty)}(v)$$ Which integrated in $du$ gives us $$f_V(v)=\int_{0}^v \alpha^2 e^{-\alpha v}du=\alpha^2 v e^{-\alpha v}\sim Gamma(2;\alpha)$$ as already known by Gamma distribution properties.
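The $\alpha=\beta$ observation can be sanity-checked by simulation (rate parametrization assumed here, so $\mathrm{Gamma}(2;\alpha)$ has mean $2/\alpha$ and variance $2/\alpha^2$; the parameter value is mine):

```python
import random

random.seed(2)

alpha = 1.5
n = 200000
# V = X + Y with X, Y i.i.d. exponential of rate alpha.
v = [random.expovariate(alpha) + random.expovariate(alpha) for _ in range(n)]

mean_v = sum(v) / n                            # should be near 2 / alpha
var_v = sum((t - mean_v) ** 2 for t in v) / n  # should be near 2 / alpha**2
```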
Upperbound for Covariance Matrix
It follows from the spectral measure representation for the covariance matrix. In a nutshell, you have $C_h=\int A(t)e^{iht}\,d\mu(t)$, where the $A(t)$ are positive semidefinite and $\mu$ is some non-negative scalar measure on $[-\pi,\pi]$. Then ${\rm trace}[C_h^*C_h]=\Re\iint e^{ih(t-s)}{\rm trace}[A(t)A(s)]\,d\mu(t)d\mu(s)$. However ${\rm trace}[A(t)A(s)]={\rm trace}[A(t)^{1/2}A(s)A(t)^{1/2}]\ge 0$ for all $t,s$.
Limit of sum of exponential functions under root
$$\lim_{x\to \infty}\left(4\times6^x-3\times10^x+8\times15^x\right)^{1/x}=\lim_{x\to \infty}(15^x)^{1/x}\left(4\times\frac{6^x}{15^x}-3\times \frac{10^x}{15^x}+8\right)^{1/x}$$ and we have $$(15^x)^{1/x}\left(4\times\frac{6^x}{15^x}-3\times \frac{10^x}{15^x}+8\right)^{1/x}&lt;15(4\times 1-3\times 0+8)^{1/x}\to 15$$ $$(15^x)^{1/x}\left(4\times\frac{6^x}{15^x}-3\times \frac{10^x}{15^x}+8\right)^{1/x}&gt;15(4\times 0-3\times 1+8)^{1/x}\to 15$$ Now apply squeeze theorem.
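The squeeze can also be observed numerically. Python's exact integers keep the huge powers exact, and `math.log` accepts arbitrarily large ints, so the $x$-th root can be taken via logarithms (a sketch; the sample points are mine):

```python
import math

def value(x):
    # (4*6^x - 3*10^x + 8*15^x)^(1/x), computed via logs so the huge
    # integer never has to fit in a float.
    return math.exp(math.log(4 * 6**x - 3 * 10**x + 8 * 15**x) / x)

v500 = value(500)
v5000 = value(5000)  # closer to the limit 15 than v500
```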
Spivak Calculus on Manifolds Exercise 2-9
From the comments above. Notice that $$\lim_{h \to 0} \frac{f(a+h) - a_0 - a_1 h}{h}=0 \implies \lim_{h \to 0} h \cdot \left( \frac{f(a+h) - a_0 - a_1 h}{h} \right)=0.$$ But the second limit is just $$ \lim_{h \to 0} f(a+h) - a_0 - a_1 h = f(a) - a_0, $$ so we get $f(a) = a_0$. Also, do note that we assume the continuity of $f$ at $a$ when we calculate the above limit. This is a minor error in part (a) of this problem, since the continuity of $f$ at $a$ is not assumed in the problem as it is stated.
Compute$\sum_{k=2}^{n+5}\frac {1}{k(k-1)}\binom {n+1}{k-2}2^k$
Note that $$\begin{align*} \frac1{k(k-1)}\binom{n+1}{k-2}&=\frac{(n+1)!}{k(k-1)(k-2)!\big((n+1)-(k-2)\big)!}\\ &=\frac{(n+1)!}{k!(n+3-k)!}\\ &=\frac1{(n+3)(n+2)}\cdot\frac{(n+3)!}{k!(n+3-k)!}\\ &=\frac1{(n+3)(n+2)}\binom{n+3}k\;, \end{align*}$$ so $$\sum_{k=2}^{n+5}\frac1{k(k-1)}\binom{n+1}{k-2}2^k=\frac1{(n+3)(n+2)}\sum_{k=2}^{n+5}\binom{n+3}k2^k\;.$$ Now use the binomial theorem and make a couple of adjustments, a bit like what you did in the previous problem.
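If I apply the suggested binomial-theorem step, the right-hand sum should collapse (dropping the $k=0,1$ terms of $\sum_k\binom{n+3}k2^k=3^{n+3}$) to $\frac{3^{n+3}-1-2(n+3)}{(n+3)(n+2)}$. Both sides can be verified exactly with rational arithmetic:

```python
from fractions import Fraction
from math import comb

def lhs(n):
    # The original sum, exactly (comb returns 0 once k - 2 > n + 1).
    return sum(
        Fraction(comb(n + 1, k - 2) * 2**k, k * (k - 1))
        for k in range(2, n + 6)
    )

def rhs(n):
    # Closed form after the rewrite and the binomial theorem (my derivation).
    return Fraction(3 ** (n + 3) - 1 - 2 * (n + 3), (n + 3) * (n + 2))

checks = [lhs(n) == rhs(n) for n in range(1, 10)]
```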
Eulerian and hamiltonian graph
Your answers are not entirely correct. A few corrections and comments follow: Every graph (except perhaps the empty graph) has subgraphs. Plenty of them. The graph in the picture has more than 8 subgraphs. The graph is clearly Hamiltonian (can you find an 8-cycle in it?) The graph indeed contains a spanning tree. In fact, every connected graph contains a spanning tree.
Equidecomposable examples
A simple example: $[0,4)\times[0,1)$ and $[0,2)\times [0,2)$ are equidecomposable in $\Bbb R^2$, because both can be written as four translated copies of $[0,1)\times [0,1)$. Another example in $\Bbb R^1$: Let $A$ be any subset of $[-1,0)$. Then $S:=A\cup [0,\infty)$ and $T:=[0,\infty)$ are equidecomposable. To see this, let $Q_1=\bigcup_{n\ge 1}(A+n)$ and $Q_2=[0,\infty)\setminus Q_1$. Then $T$ is decomposable in $Q_1,Q_2$ per $\phi_1(x)=\phi_2(x)=x$, and $S$ is decomposable in $Q_1,Q_2$ per $\phi_1(x)=x-1$ and $\phi_2(x)=x$. Using the axiom of choice, one can show that somewhat surprisingly the sphere $S_2\subset \Bbb R^3$ is equidecomposable with two disjoint copies of it, say $S_2\cup (S_2+(2,0,0))$.
The integral with respect to the Riesz measure over a compact with empty interior
It's been a while since I've studied this stuff, but I think Riesz measures can be point masses (i.e. there are subharmonic $u$ with $\Delta u = \delta_{x_0}$ in the sense of distributions). So taking $F$ to be that point immediately disproves your proposed equality, since integrating over a set of Lebesgue measure $0$ gives $0$.
Sprague-Grundy function periodicity for finite subtraction games
Since the set $S$ is finite, for each $x$ we have $|\{g(y) : y \in F(x)\}|\le |S|$, so $g(x)=\operatorname{mex}\{g(y) : y \in F(x)\}\le |S|$. For each $x>\max S$ the value of $g(x)$ is completely determined by the values of $g$ on the segment $[x-\max S, x-1]$. Since there are only finitely many ways to fill a discrete segment of length $\max S$ with numbers not bigger than $|S|$, by the pigeonhole principle there exist naturals $n_0$ and $t>0$ such that $g(n_0+i+t)=g(n_0+i)$ for each $0\le i<\max S$. By induction it then follows that $g(n+t)=g(n)$ for all $n\ge n_0$, i.e. $g$ is eventually periodic.
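The argument is easy to run in code: compute the Grundy values and search for two matching windows of length $\max S$. For instance, for the classic subtraction set $S=\{1,3,4\}$ the sequence is periodic with period $7$ (pattern $0,1,0,1,2,3,2$); the helper names below are mine:

```python
def grundy_sequence(S, length):
    # Grundy values g(0), ..., g(length - 1) of the subtraction game on S.
    g = []
    for x in range(length):
        reachable = {g[x - s] for s in S if x - s >= 0}
        m = 0
        while m in reachable:  # mex: minimal excluded non-negative integer
            m += 1
        g.append(m)
    return g

def find_period(S, length=200):
    # Pigeonhole in action: a repeated window of length max(S) forces
    # periodicity from that point on.
    g = grundy_sequence(S, length)
    w = max(S)
    for n0 in range(length):
        for t in range(1, length - n0 - w):
            if g[n0:n0 + w] == g[n0 + t:n0 + t + w]:
                return t
    return None

period = find_period({1, 3, 4})
```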
How to find primitive point on an elliptic curve?
The problems of determining if a group given by reduction of an elliptic curve over a prime $p$ is cyclic, and if so, finding a point $P$ that generates (a "primitive point"), have been extensively investigated. An obvious "brute force" strategy would set a coordinate to successive values $0,1,\ldots,p-1$ and check if the number of points on the reduced elliptic curve matches the order of some point found in this way. This method gives an $O(mp)$ order of complexity, where $m$ is group order. The literature shows that the first order dependence on $p$ in searching for primitive points can be lowered to a deterministic $O(p^{0.5+\epsilon})$. To discuss the algorithms involved we first recap the framework in which these computations are carried out. Let $E$ be an elliptic curve given by an equation in "Weierstrass form": $$ y^2 = x^3 + Ax + B \text{ where } A,B \in \mathbb{Z} $$ The rational points on this curve, together with a "point at infinity" $\mathcal{O}$ where all vertical lines intersect, form a geometrically defined abelian group. That is, a straight line through any two rational points on the curve should intersect the curve in a third point, and with a bit of algebraic reasoning (using the integer coefficients of the equation), it is deduced that this third point is also rational. The result of the group operation on the first two points is given by reflecting the third point in the $x$-axis (about which the curve above is symmetric). By a slight abuse of notation we will refer to this group as $E(\mathbb{Q})$, also taking this to mean the rational points on $E$ together with point $\mathcal{O}$, which works out to be the identity element for this abelian group. There is a famous result, the Mordell-Weil theorem, which says this group is finitely generated. 
That is: $$ E(\mathbb{Q}) = E(\mathbb{Q})_{tors} \times \mathbb{Z} \times \mathbb{Z} \ldots \times \mathbb{Z} $$ where the first factor, the torsion subgroup $E(\mathbb{Q})_{tors}$, consists of all the group elements of finite order in $E(\mathbb{Q})$, and the number $r$ of the remaining torsion-free factors (copies of $\mathbb{Z}$) is the rank of $E(\mathbb{Q})$. A similar construction can be carried out using coordinates from the prime field $\mathbb{F}_p$. We consider this here only for odd primes $p$ that do not divide the discriminant $\Delta$ of the cubic: $$ \Delta = -16(4A^3 + 27B^2) $$ Such primes give "good reduction" in that any rational point $P=(x,y)$ on $E$ can be mapped to a point $\tilde{P} = (\tilde{x},\tilde{y}) \in \mathbb{F}_p \times \mathbb{F}_p$ when neither $x,y$ has least denominator divisible by $p$, or to $\mathcal{O}$ otherwise (as then both $x,y$ must have least denominator divisible by $p$). This "reduction modulo $p$" gives a well-defined group homomorphism $E(\mathbb{Q}) \to \tilde{E}(\mathbb{F}_p)$, the verification of which is touched upon in this 2003 writeup by M. Woodbury. This homomorphism need not be surjective. While $\tilde{E}(\mathbb{F}_p)$ is not necessarily cyclic, it can always be generated by two elements! This perhaps addresses u_seem_surprised's Comment that asks about the possibility of "more than 1 cyclic group". In any case there are generators $P,Q$ with $M=|P|$ a multiple of $L=|Q|$ so that: $$ \tilde{E}(\mathbb{F}_p) \cong \mathbb{Z}/M \times \mathbb{Z}/L $$ whose order is $N=ML$, and the exponent of this group is $M$. Let's take the example mentioned in the Comments on the Question, the reduction of: $$ y^2 = x^3 + 2x + 2 \;\; \text{ over } \;\; \mathbb{Z}/17 $$ and take the "brute force" path, informed by the material above. 
The points on the reduced elliptic curve are $(x,\pm y)$ where $x$ gives a right hand side value that is a quadratic residue (and $y$ a corresponding square root), as well as $\mathcal{O}$, the "point at infinity". The set of quadratic residues mod $17$ can be found by simply squaring $0,1,\ldots,8$ mod $17$, or if one feels more energetic by using the law of quadratic reciprocity, or by Googling it if one feels less energetic. With those in hand, I built a quick spreadsheet to evaluate the cubic $x^3 + 2x + 2$ at $x=0,1,\ldots,16$ mod $17$. Checking these results against the list of quadratic residues and their roots gives these $19$ points: $$ (0,\pm 6), (3,\pm 1), (5,\pm 1), (6,\pm 3), (7,\pm 6), (9,\pm 1), (10,\pm 6), (13,\pm 7), (16,\pm 4), \mathcal{O} $$ Since the prime order $19=ML$ factors with $L|M$ only for $M=19,L=1$, all the points except $\mathcal{O}$ are primitive (i.e. each has order $19$ except the identity). So $(5,1)$ is a primitive point (but so are most of them). This previous Math.SE Question asks how to tell if the same curve $E$ reduced over the same prime $p=17$ has as primitive point $(0,6)$ instead of $(5,1)$, which of course we know it does. Another Math.SE Question asks about determining cyclicity for same curve $E$ reduced over a different prime, $p=11$. With nine points on the curve, it requires some checking of points to see if any have order $9$ (because this is no longer a prime order). But such a primitive point is found, so the group is cyclic. Finally an example of an elliptic curve giving a non-cyclic group is in Silverman's tutorial, about a third of the way through (page numbered 29). Take the curve $y^2 = x^3 - 5x + 8$ reduced over prime $p=37$. Plugging and chugging much as we did above, one finds 45 points (including $\mathcal{O}$). Checking how many points there are whose orders divide $3$ (there are nine), it is shown that the group is $\mathbb{Z}/15 \times \mathbb{Z}/3$ rather than the cyclic group $\mathbb{Z}/45$. 
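The spreadsheet computation and the order check above are easy to redo in code; a Python sketch (the function names are mine):

```python
def ec_points(A, B, p):
    """All points of y^2 = x^3 + Ax + B over F_p; None stands for O."""
    pts = [None]
    for x in range(p):
        rhs = (x * x * x + A * x + B) % p
        for y in range(p):
            if (y * y) % p == rhs:
                pts.append((x, y))
    return pts

def ec_add(P, Q, A, p):
    """Group law on the reduced curve (None is the identity)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                      # vertical line: P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def order(P, A, p):
    n, Q = 1, P
    while Q is not None:
        Q = ec_add(Q, P, A, p)
        n += 1
    return n

pts = ec_points(2, 2, 17)            # y^2 = x^3 + 2x + 2 over F_17
assert len(pts) == 19                # 18 affine points plus O
assert order((5, 1), 2, 17) == 19    # (5,1) is a primitive point
```

Since the group order $19$ is prime, every point except $\mathcal{O}$ has order $19$, confirming the conclusion above.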
Reading List I realize the OP does not wish to be sent off with a long reading assignment, but I'd like to give a plug for a book and a few papers: Silverman, J.H. and John Tate, Rational points on elliptic curves. Undergraduate Texts in Mathematics. Springer-Verlag, New York, 1992. Silverman, J.H. An Introduction to the Theory of Elliptic Curves. Summer School on Computational Number Theory and Applications to Cryptography. University of Wyoming, 2006. 89 pp. Kohel, D.R. and Igor E. Shparlinski, On Exponential Sums and Group Generators for Elliptic Curves over Finite Fields. Proc. Algorithmic Number Theory Symposium, Leiden, 2000, Lect. Notes in Comp. Sci., Springer-Verlag, Berlin, 2000, v.1838, 395-404.
Perpendicular tangents of parabola and possible functions g(x)
First, let's find the slope of the tangent line to $f(x)$ at $x = a$. The derivative of $x^2$ is $2x$, so the slope of the tangent line is $2a$. Because the two tangent lines are perpendicular, the slope of the tangent line to $g(x)$ at $x=a$ must be $-\frac1{2a}$. In other words, we want all the functions $g(x)$ whose derivative at each $a$ is $-\frac1{2a}$, so just take the antiderivative. Replacing $a$ with $x$ gives $-\frac1{2x}$, whose antiderivative is $$-\frac{\ln{|x|}}{2}$$ Adding a constant $C$ (moving a function up and down will not change its derivative), we find that $$-\frac{\ln{|x|}}{2}+C$$ is the correct answer.
Is there any other function satisfying the system of equations involving integration?
You need more than just $f(x)$ being differentiable etc.; for example, it needs to be a polynomial, or of the form $e^{g(x)}$. The problem is that you can posit an arbitrary family $f(x)=f(a,b,c,x)$, where $a, b, c$ are parameters, and your conditions then amount to solving for $a,b,c$. Take, out of nowhere, $f(x)=ax^5+bx^2+c$: it may well produce the required system. Since there are multiple solutions, the only way this could work is if the final integral were for some reason unique. To refute that, it suffices to find two examples that give different values of the integral, and I am sure you will find them as soon as you start looking into all the possible forms the function can have.
Polynomial rings, division algorithm: $\, x^m-1\bmod x^n-1$
Hint $\rm\ mod\,\ x^{\Large n}\!-1\!:\ x^{\Large n}\equiv 1,\ \ so\ \ x^{\Large m}\!-1 \equiv x^{\Large m\ mod\ n}\!-1 \equiv 0 \!\iff\! m\ mod\ n = 0 \!\iff\! n\mid m$ Remark $\ $ One can go further. The polynomial sequence $\rm\ f_n = (x^n-1)/(x-1),\, $ just like the Fibonacci sequence, is a strong divisibility sequence, i.e. $\rm\: (f_m,f_n)\: =\: f_{\:(m,n)}.\,$ The proof is simple - essentially the same as the proof of the Bezout identity for integers - see my post here. We can view the polynomial Bezout identity as a q-analog of the integer Bezout identity, e.g. let's compare the Bezout identity for the gcd $\rm\ \color{#c00}3 = (\color{#0a0}{15},\color{blue}{21})\ $ in polynomial and integer form: $$\rm\displaystyle \color{#c00}{\frac{x^3-1}{x-1}}\ =\ (x^{15} + x^9 + 1)\ \color{#0a0}{\frac{x^{15}-1}{x-1}}\ -\ (x^9+x^3)\ \color{blue}{\frac{x^{21}-1}{x-1}}$$ for $\rm\ x = 1\ $ this specializes to $\ \color{#c00}3\ =\ (3)\ \color{#0a0}{15}\ -\ (2)\ \color{blue}{21},\, $ the integer Bezout identity for the gcd. It is well worth studying these binomial divisibility properties since they occur quite frequently in number theoretical applications. Moreover, they provide excellent motivation for the more general study of divisibility theory, $ $ esp. in divisor theory form. For an introduction see Borevich and Shafarevich: Number Theory.
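The congruence $x^m-1\equiv x^{m\bmod n}-1 \pmod{x^n-1}$ can be verified directly, and iterating it runs Euclid's algorithm on the exponents; a quick Python sketch (helper name is mine):

```python
import math

def poly_rem_exp(m, n):
    """Remainder of x^m - 1 modulo x^n - 1 by long division.

    Polynomials are dicts {degree: coefficient}; the divisor is monic,
    so each step just replaces x^d by x^(d-n)."""
    f = {m: 1, 0: -1}                         # x^m - 1
    while max(f, default=-1) >= n:
        d = max(f)
        c = f.pop(d)
        # subtract c * x^(d-n) * (x^n - 1), i.e. shift the top term down by n
        f[d - n] = f.get(d - n, 0) + c
        f = {k: v for k, v in f.items() if v != 0}
    return f

assert poly_rem_exp(21, 15) == {6: 1, 0: -1}   # x^21 - 1 mod x^15 - 1 = x^6 - 1
assert poly_rem_exp(15, 3) == {}               # x^3 - 1 divides x^15 - 1
assert math.gcd(21, 15) == 3                   # so gcd(x^21-1, x^15-1) = x^3-1
```

Each division step mirrors one step of the Euclidean algorithm on $(m,n)$, which is exactly why the gcd of the polynomials is $x^{(m,n)}-1$.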
Supremum and infimum of a given set without using limits or differentiation
Consider two separate cases. The first is $x \ge 1$ and the second is $x <1$. If $x \ge 1$, then $$y=x + |x-1| = x+x-1 = 2x-1$$ Clearly this has no upper bound, while the lower bound is $y=1$ (when $x=1$). If $x<1$, then $$y=x + |x-1| = x-x+1 = 1$$ In other words your set is $$\left\{ y \in \Bbb R | y \ge 1\right\}$$ which is the interval $[1, + \infty )$.
Ratio of circumradius to inradius of a regular tetrahedron
Assuming the $\text{centroid}\equiv\text{incenter}\equiv\text{circumcenter}$ of the regular tetrahedron $ABCD$ lies at the origin, i.e. $A+B+C+D=0$, the ratio $\frac{R}{r}$ equals $\frac{OA}{OA'}$ with $A'$ being the centroid of $BCD$. Hence $$ \frac{R}{r} = \frac{3\|A\|}{\|B+C+D\|} =\frac{3\|A\|}{\|A\|} = 3 $$ with a straightforward generalization to the regular $n$-simplex. This can be shown also by embedding $ABCD$ in $\mathbb{R}^4$ via $A\mapsto(1,0,0,0),B\mapsto(0,1,0,0),$ $C\mapsto(0,0,1,0),$ $D\mapsto(0,0,0,1)$. Metric computations in this framework are utterly simple.
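The $\mathbb{R}^4$ embedding makes this a two-line numeric check (pure Python, coordinates as in the answer):

```python
from math import sqrt

# Vertices of the regular tetrahedron as standard basis vectors of R^4.
A = (1, 0, 0, 0); B = (0, 1, 0, 0); C = (0, 0, 1, 0); D = (0, 0, 0, 1)
center = tuple(sum(v) / 4 for v in zip(A, B, C, D))    # common centroid
faceBCD = tuple(sum(v) / 3 for v in zip(B, C, D))      # centroid of face BCD

def dist(p, q):
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

R = dist(center, A)         # circumradius
r = dist(center, faceBCD)   # inradius: the foot of the perpendicular
                            # from the center to face BCD is its centroid
assert abs(R / r - 3) < 1e-9
```

Replacing $4$ by $n+1$ basis vectors gives the same computation for the regular $n$-simplex, where the ratio comes out to $n$.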
The geometric object given by a positive definite symmetric matrix?
The spectral theorem is your friend as mentioned by @Ted Shifrin in the comments. Here's a slightly more general overview of such level sets. Let's work in $\Bbb{R}^n$, where we have the standard inner product \begin{align} \langle \xi,\eta \rangle:= \sum_{i=1}^n \xi_i \eta_i = \xi^t \eta = \eta^t \xi \end{align} Given any matrix $A \in M_{n \times n}(\Bbb{R})$, we can always consider the following quadratic function from $\Bbb{R}^n \to \Bbb{R}$ defined by \begin{align} \xi \mapsto \langle \xi, A\xi\rangle =\xi^t A\xi = \sum_{i,j=1}^n\xi_j A_{ij} \xi_i \end{align} and we can then investigate what its level set $S_1 := \{\xi \in \Bbb{R}^n| \, \, \langle \xi, A\xi\rangle = 1\}$ looks like. Since you tagged differential geometry, it may be interesting for you to note that if we assume $A$ is invertible then one can use the regular value theorem to quickly prove that $S_1$ is a smooth $(n-1)$-dimensional submanifold of $\Bbb{R}^n$. Now, let's only assume that $A$ is a symmetric matrix. Then by the Spectral theorem for real matrices, the matrix $A$ can be orthogonally diagonalized, i.e there exists a diagonal matrix $\Lambda \in M_{n \times n}(\Bbb{R})$ (with diagonal entries denoted $\lambda_1, \dots, \lambda_n$), and a matrix $P \in M_{n \times n}(\Bbb{R})$ which is orthogonal ($P^tP = PP^t = I$, so that $P^{-1} = P^t$) such that \begin{align} A = P\Lambda P^{-1} = P \Lambda P^t. \end{align} So, now we can write the level set $S_1$ as follows: \begin{align} S_1 &= \left\{\xi \in \Bbb{R}^n| \, \left \langle \xi, A \xi\right \rangle = 1\right\} \\ &= \left\{\xi \in \Bbb{R}^n| \, \left \langle \xi, P \Lambda P^t \xi\right \rangle = 1\right\} \\ &= \left\{\xi \in \Bbb{R}^n| \, \left \langle (P^t\xi), \Lambda (P^t \xi)\right \rangle = 1\right\} \\ &= \left\{\eta \in \Bbb{R}^n| \, \left \langle \eta, \Lambda \eta \right \rangle = 1\right\}, \end{align} (where the last equality follows because $P^t$ is an invertible matrix). 
So, now, let's write out this last condition more explicitly: $\left \langle \eta, \Lambda \eta \right \rangle = 1$ means that \begin{align} \sum_{i=1}^n \lambda_i \eta_i^2 &= 1 \tag{$*$} \end{align} Let's now investigate this condition in several different cases. Suppose that $A$ is symmetric and positive definite. The positive-definiteness is equivalent to saying that all the eigenvalues $\lambda_1, \dots, \lambda_n$ of $A$ are strictly positive. In this case, we can define $a_i := \dfrac{1}{\sqrt{\lambda_i}}$, so that $\lambda_i = \dfrac{1}{a_i^2}$, and hence \begin{align} 1 &= \sum_{i=1}^n \lambda_i \eta_i^2 \\ &= \sum_{i=1}^n \dfrac{\eta_i^2}{a_i^2} \end{align} In this form, the equation may be more recognizable as an $(n-1)$-dimensional ellipsoid in $\Bbb{R}^n$; compare with the familiar equation for an ellipsoid in $\Bbb{R}^3$: \begin{align} \dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} + \dfrac{z^2}{c^2} = 1 \end{align} I think that should have answered your question, but let me just go on, because I like this question :) Notice that if all the eigenvalues $\lambda_i$ are equal (equivalently all the $a_i$'s are equal), then the result is an $(n-1)$-dimensional sphere centered at the origin, having radius $a := a_1 = \dots = a_n$. If we now change hypotheses to: $A$ being symmetric, and invertible (so all the eigenvalues are non-zero) but if we do not make the positive-definiteness assumption, then it means the eigenvalues could have possibly different sign. So, in this case, the resulting level set (assuming it is not empty) will be a certain $(n-1)$-dimensional hyperboloid. So really, the take home message is that given a symmetric (and invertible so that everything is nice) matrix $A$, the level set of the quadratic function $\xi \mapsto \langle \xi, A\xi\rangle$ (if not empty) is really an $(n-1)$-dimensional ellipsoid/hyperboloid in disguise. 
The role of the spectral theorem is to find an orthogonal matrix $P$ (which by reordering its columns if necessary, we may assume has $\det(P) = +1$), which simply means that we should rotate our coordinate system from the $\xi$ coordinates (in your case $\xi = (x,y,z)$) to a new $\eta$ coordinate system where everything is simple and obvious.
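Numerically, the rotation $P$ and the semi-axes $a_i = 1/\sqrt{\lambda_i}$ fall out of a single eigensolver call. A sketch with NumPy (the matrix is an arbitrary positive definite example of mine):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 1.5]])   # symmetric positive definite (example)

lam, P = np.linalg.eigh(A)        # A = P diag(lam) P^t with P orthogonal
assert np.all(lam > 0)                          # positive definiteness
assert np.allclose(P @ np.diag(lam) @ P.T, A)   # spectral decomposition
assert np.allclose(P.T @ P, np.eye(3))          # P is orthogonal

# Semi-axes of the ellipsoid {xi : <xi, A xi> = 1}: the axis endpoints,
# rotated back into xi-coordinates, lie on the level set.
a = 1 / np.sqrt(lam)
for i in range(3):
    xi = P[:, i] * a[i]
    assert np.isclose(xi @ A @ xi, 1.0)
```

The columns of $P$ are the principal axes of the ellipsoid, which is exactly the change of coordinates $\eta = P^t\xi$ described above.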
Sum of probabilities versus probability of sum
We have $$\{|X|+|Y|\geqslant 2\varepsilon^2\}\subset\{|X|\geqslant\varepsilon^2\}\cup \{|Y|\geqslant\varepsilon^2\}, $$ which yields \begin{align} \mathbb P(|X|+|Y|\geqslant 2\varepsilon^2) &\leqslant \mathbb P(\{|X|\geqslant\varepsilon^2\}\cup \{|Y|\geqslant\varepsilon^2\})\\ &\leqslant \mathbb P(|X|\geqslant\varepsilon^2)+\mathbb P(|Y|\geqslant\varepsilon^2). \end{align}
Name of theorem that random deviations in $n$ dimensional space tend to normal vector?
This is a result of measure concentration on the sphere. Levy's theorem tells you: Suppose $f:S^{n-1}\to\mathbb{R}$ has $$\lVert f\rVert_{Lip}=\sup\left\{\frac{|f(x)-f(y)|}{d(x,y)};\;x,y\in S^{n-1}\right\}\leq L,$$ then $$\mathbb{P}\left(|f-\mathbb{E}f|\geq t\right)\leq 4\exp\left(-cn\frac{t^2}{L^2}\right).$$ The special case that you need is just $f(y)=y\cdot t$, where $t$ is the tangent to $l$ (Lipschitz norm 1). To prove this you don't need the full power of the above theorem, it is enough to use Maxwell's observation: Let $X=(X_1,\ldots,X_n)$ be a uniform vector in $S^{n-1}$, then for any $\theta\in S^{n-1}$ the random variable $X\cdot\theta$ is close in distribution to $G/\sqrt{n}$, where $G$ is a standard Gaussian. To prove this, observe that the density of $X\cdot\theta$ is proportional to $(1-t^2)^{(n-3)/2}$ where $-1\leq t\leq 1$, and by Taylor approximation, when $n$ is large, it is close to $e^{-t^2n/2}$. (This is actually part of the proof of the theorem I wrote here). To get the density function you can use the following: Assume $X\cdot \theta=X_1$ (by rotation invariance it is all the same), and note that if you fix the first coordinate to be $t$, then the other coordinates are distributed uniformly on a sphere of radius $\sqrt{1-t^2}$. Hence you can do a change of variables $[-1,1]\times S^{n-2}\to S^{n-1}$ by $$(t,y)\mapsto (t,\sqrt{1-t^2}y).$$ Since the surface area of the lower dimensional sphere changes under scaling by $(1-t^2)^{(n-2)/2}$, and the gradient of the above map adds a factor of $(1-t^2)^{-1/2}$ you get the correct density. In recent years a number of good new books about geometric functional analysis have appeared. For example: Asymptotic Geometric Analysis P1 (Artstein-Avidan, Giannopoulos, Milman). High Dimensional Probability (Vershynin). Alice and Bob Meet Banach (Aubrun, Szarek). There are many other books, these are new and very accessible.
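Maxwell's observation is easy to see empirically: sample uniform points on $S^{n-1}$ by normalizing Gaussian vectors and look at $\sqrt{n}\,X\cdot\theta$. A NumPy sketch (the dimension, sample size, and tolerances are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 200, 20_000

# Uniform points on S^{n-1}: normalize standard Gaussian vectors.
G = rng.standard_normal((N, n))
X = G / np.linalg.norm(G, axis=1, keepdims=True)

theta = np.zeros(n); theta[0] = 1.0     # any fixed direction works
proj = np.sqrt(n) * (X @ theta)         # sqrt(n) * (X . theta)

# For large n this should look like a standard Gaussian.
assert abs(proj.mean()) < 0.05
assert abs(proj.var() - 1.0) < 0.05
assert abs(np.mean(proj < 1.0) - 0.8413) < 0.02   # compare to Phi(1)
```

The variance of $X\cdot\theta$ is exactly $1/n$ by symmetry, which is why the $\sqrt{n}$ rescaling is the right normalization.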
$g(x) = f(x,0)$ and $h(y) = f(0,y)$. If $x = 0$ is a local minimum of $g$ and $y = 0$ is a local minimum of $h$, then $(0,0)$ is a local minimum of $f$.
Let us consider $f:\mathbb{R}^2 \to \mathbb{R},~ f(x,y):=-xy$. Then $h=g\equiv 0$, but $f$ has no local minimum at $(0,0)$: along the line $y=x$ we have $f(x,x)=-x^2<0$ arbitrarily close to the origin.
Calculating limits the easy way
I believe two of the more useful rules for limit calculations apply to algebraic operations within an expression. In particular, $$\lim_{x \rightarrow a} \bigl(f(x) + g(x)\bigr) = \lim_{x \rightarrow a} f(x) + \lim_{x \rightarrow a} g(x)$$ and $$\lim_{x \rightarrow a} f(x)g(x) = \left(\lim_{x \rightarrow a} f(x)\right)\left(\lim_{x \rightarrow a} g(x)\right),$$ as long as the limits on the right-hand side of each equation exist.
A sequence that converges pointwise but not uniformly.
This is essentially correct. In some places you use $\chi_{[0,1/n]}$ instead of $\chi_{(0,1/n]}$; for those functions, the sequence doesn't converge at $x=0$. You can make your functions converge pointwise on all of $[0,1]$ and remain bounded by removing the factor of $n$: if $f_n=\chi_{(0,1/n]}$, then $f_n(x)$ converges to $0$ for all $x$. You can also make the functions continuous if you wish by replacing the drop from $n$ to $0$ (or from $1$ to $0$ as in the previous paragraph) with a very short line segment, and moving your interval to somewhere in the middle of $[0,1]$, so that you can do this at both endpoints.
Fermat–Torricelli point for polygons
Not a purely geometric construction, but an efficient algorithm (due to Weiszfeld) is well-known in the literature. These notes by Nam are an excellent survey on the topic, in my opinion.
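Weiszfeld's iteration is just a repeated weighted average of the vertices, with weights inversely proportional to the current distances. A minimal Python sketch (no safeguard for the iterate landing exactly on a vertex, which a production version would need):

```python
from math import dist

def weiszfeld(points, iters=200):
    """Approximate the Fermat-Torricelli point (geometric median) of `points`."""
    # Start near, but not exactly at, the centroid to dodge a zero distance.
    y = [sum(c) / len(points) + 1e-3 for c in zip(*points)]
    for _ in range(iters):
        w = [1.0 / dist(p, y) for p in points]   # inverse-distance weights
        s = sum(w)
        y = [sum(wi * p[k] for wi, p in zip(w, points)) / s
             for k in range(len(y))]
    return y

# For the unit square, the geometric median is the center by symmetry.
m = weiszfeld([(0, 0), (1, 0), (1, 1), (0, 1)])
assert abs(m[0] - 0.5) < 1e-6 and abs(m[1] - 0.5) < 1e-6
```

Each step is a fixed point iteration for the first-order optimality condition of $\sum_i \lVert p_i - y\rVert$, which is why it converges to the minimizer when no iterate hits a vertex.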
modeling with exponential distributions
Assuming that Naomi walks up to the first available clerk, we observe that Naomi cannot begin being served until one of John or Paul has already left. Therefore, $$Z &gt; \max(X,Y) - \min(X,Y)$$ captures the event that Naomi remains in the office after both John and Paul have left. Why is the answer not simply $Z &gt; \max(X,Y)$? Because Naomi's service time does not begin until the amount of time $\min(X,Y)$ has elapsed. To give an example, suppose $X = 1$ and $Y = 3$ (in minutes). Then Naomi begins to be served at time $t = 1$ minute. She will remain after both John and Paul have left if and only if her service time is longer than $2$ minutes. This is precisely $\max(X,Y) - \min(X,Y) = 3 - 1$.
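A quick Monte Carlo check of this event (assuming, purely for illustration, that all three service times are i.i.d. Exp(1); under that assumption $\max(X,Y)-\min(X,Y)$ is again Exp(1) by memorylessness, so the probability should come out to $1/2$):

```python
import random

random.seed(0)
N = 200_000
hits = 0
for _ in range(N):
    X = random.expovariate(1.0)   # John's service time
    Y = random.expovariate(1.0)   # Paul's service time
    Z = random.expovariate(1.0)   # Naomi's service time
    # Naomi leaves last iff her service outlasts the remaining service
    # of whoever is still being helped when she starts.
    if Z > max(X, Y) - min(X, Y):
        hits += 1
assert abs(hits / N - 0.5) < 0.01
```

With unequal rates the event is the same; only the closed-form probability changes.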
Derivatives of a function
By the fundamental theorem of calculus you can say "yes". Loosely speaking, given $f$ continuous on $[a,b]$ (continuity, rather than differentiability, is the hypothesis needed here), define $F(x) = \int_{a}^x f$. Then $\dfrac{d}{dx}F(x) = f(x)$.
Multiple linear regression inconsistency?
$\hat{\beta} = (x_1'x_1)^{-1}x_1'y$. Now substitute the true model $y=X\beta + \epsilon$ for $y$: $$\hat{\beta} = (x_1'x_1)^{-1}x_1'y = \beta_1 + (x_1'x_1)^{-1}x_1'x_2\beta_2+ (x_1'x_1)^{-1}x_1'\epsilon$$ Taking the probability limit sends the last term to $0$, which leaves $$\hat{\beta} \rightarrow \beta_1 + \frac{\operatorname{cov}(x_1,x_2)}{\operatorname{var}(x_1)}\beta_2$$ This means all you need to do is figure out when $\frac{\operatorname{cov}(x_1,x_2)}{\operatorname{var}(x_1)}\beta_2=0$.
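The omitted-variable formula is easy to confirm by simulation (all the numbers below are arbitrary choices of mine): with $x_2 = 0.5x_1 + \text{noise}$ and unit-variance $x_1$, the short regression's slope should tend to $\beta_1 + 0.5\,\beta_2$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
beta1, beta2 = 2.0, 3.0

x1 = rng.standard_normal(n)                  # var(x1) = 1
x2 = 0.5 * x1 + rng.standard_normal(n)       # cov(x1, x2) = 0.5
y = beta1 * x1 + beta2 * x2 + rng.standard_normal(n)

# OLS of y on x1 alone: slope = cov(x1, y) / var(x1)
slope = np.cov(x1, y)[0, 1] / np.var(x1, ddof=1)
assert abs(slope - (beta1 + 0.5 * beta2)) < 0.05   # probability limit is 3.5
```

Setting the correlation between $x_1$ and $x_2$ to zero (or $\beta_2=0$) makes the bias term vanish, matching the condition derived above.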
Integration of $ \int_1^2 \frac{3x^3 + 3x^2 - 5x + 4}{3x^3 - 2x^2 +3x -2} \mathrm dx$
Note that $$\frac{3x^3 + 3x^2 - 5x + 4}{3x^3 - 2x^2 +3x -2}=\frac{3x^3 - 2x^2 +3x -2+\color{red}{5x^2-8x+6}}{3x^3 - 2x^2 +3x -2}=1+\frac{5x^2-8x+6}{3x^3 - 2x^2 +3x -2}$$ and $$3x^3-2x^2+3x-2=(3x-2)(x^2+1)$$ so $$\int_1^2\frac{3x^3 + 3x^2 - 5x + 4}{3x^3 - 2x^2 +3x -2}\,dx=[x]_1^2+\int_1^2\frac{5x^2-8x+6}{(3x-2)(x^2+1)}\,dx$$ Now can you use partial fractions?
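For reference, the decomposition works out (my computation, not part of the hint) to $\frac{5x^2-8x+6}{(3x-2)(x^2+1)}=\frac{2}{3x-2}+\frac{x-2}{x^2+1}$, which can be sanity-checked with exact rational arithmetic:

```python
from fractions import Fraction

def lhs(x):
    return Fraction(5 * x**2 - 8 * x + 6, (3 * x - 2) * (x**2 + 1))

def rhs(x):
    return Fraction(2, 3 * x - 2) + Fraction(x - 2, x**2 + 1)

# Exact agreement at enough sample points pins down the identity, since
# the difference would be a polynomial of degree <= 2 over a fixed
# denominator, vanishing at more than 2 points.
for x in [1, 2, 3, 5, 10]:
    assert lhs(x) == rhs(x)
```

From there the remaining integral splits into a logarithm from each of $\frac{2}{3x-2}$ and $\frac{x}{x^2+1}$, plus an arctangent from $\frac{-2}{x^2+1}$.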
Surjective Morphism of affine algebraic variety
If we are talking about maps between affine domains: let $A\to B$ be an inclusion of domains which are finitely generated $k$-algebras. Since the dimension of an affine domain equals its transcendence degree, and $A\to B$ induces an injection $Q(A)\to Q(B)$, we win. If it is just a map of finitely generated $k$-algebras, say $A\to B$: since the set map $\operatorname{Spec} B\to \operatorname{Spec} A$ is surjective, it follows that the kernel of $A\to B$ consists of nilpotents, so we may assume $A\to B$ is injective. Now the dimension of $A$ equals its transcendence degree, which gives $\operatorname{dim}B\geq\operatorname{dim}A$.
Does my tiling pattern based on regular pentagons have any value?
It looks like your tiles can also tile the plane periodically; that's what happens in each of the five sectors of the plane that meet at the central pentagon. Non-periodic tilings by tiles that can also tile periodically are, unfortunately, not something particularly new and remarkable. (It is simple to construct such an example with $2\times 1$ rectangles, for example.)
Approximating solutions for the ODE $y'=\exp(y/x)$
Calculate the first few derivatives analytically $$y' = e^{\frac{y}{x}}$$ $$y'' = e^{\frac{y}{x}}\left(\frac{y'}{x} - \frac{y}{x^2}\right)$$ $$y''' = e^{\frac{y}{x}}\left(\frac{y'}{x} - \frac{y}{x^2}\right)^2 + e^{\frac{y}{x}}\left(\frac{y''}{x} - \frac{2y'}{x^2} + \frac{2y}{x^3}\right)$$ which gives us $y'(1) = 1$, $y''(1) = 1$ and $y'''(1) = 0$. Now Taylor's theorem gives us $$y(x) = \sum \frac{y^{(n)}(1)}{n!}(x-1)^n = (x-1) + \frac{(x-1)^2}{2} + \mathcal{O}[(x-1)^4]$$ For the second part note that $$y' = x\left(\frac{y}{x}\right)' + \frac{y}{x}$$ so if the second term could be neglected close to the singularity at $x=x_0$ then the ODE would read $$\frac{d}{dx}e^{-\frac{y}{x}} = -\left(\frac{y}{x}\right)'e^{-\frac{y}{x}} \approx -\frac{1}{x}$$ which has the solution $$y \approx -x\log\left[\log\left(\frac{x_0}{x}\right)\right]$$ It remains to check that this is a good solution. We do this by calculating $$\frac{y'}{e^{y/x}} = 1 - z \log z,~~~~\text{where}~~~~z =\log\left(\frac{x_0}{x}\right)$$ and as $x\to x_0$ we have $z\to 0$ and since $\lim_{z\to 0} z\log z = 0$ we have that our solution is a good approximation (in the sense that $\frac{y'}{e^{y/x}} \approx 1$) close to $x=x_0$.
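The Taylor polynomial can be checked against a numerical integration of the ODE with $y(1)=0$; a hand-rolled classical RK4 sketch (the step size and endpoint are arbitrary choices of mine):

```python
from math import exp

def f(x, y):
    return exp(y / x)          # the right-hand side y' = e^{y/x}

# Classical RK4 from x=1, y(1)=0, up to x=1.1.
x, y, h = 1.0, 0.0, 1e-4
while x < 1.1 - 1e-12:
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h

taylor = (x - 1) + (x - 1) ** 2 / 2     # the series from the answer
assert abs(y - taylor) < 1e-3           # error is O((x-1)^4) near x = 1
```

Since the cubic term of the series vanishes, the agreement at $x=1.1$ is much tighter than the stated tolerance; the check mainly guards against a sign or coefficient slip in the derivative computations.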