Where have I gone wrong with this complicated equation?
It has to be the limit; there is no given meaning to $\log_{n\to\infty}$. With $\log$ replaced with $\lim$, you have \begin{align} \frac4\pi\,\int_0^\infty\frac {\sin\left ( \frac {\left (\ln\sum_{n=0}^\infty \frac {x^n}{n!}\right)^2}{\ln \lim_{n\to\infty} \left (1+\frac xn\right)^n}\right)}{\frac {101x}{\sum_{i=0}^{100}i}}\,dx &= \frac{4\times5050}{101\pi}\,\int_0^\infty\frac {\sin x}{x}\,dx\\ \ \\ &= \frac{200}{\pi}\,\frac\pi2=100. \end{align}
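If you want to sanity-check this numerically, here is a short Python sketch; it only uses the facts that the sine's argument collapses to $x$ and that $\operatorname{Si}(\infty)=\pi/2$ (approximated at a large argument below):

```python
import numpy as np
from scipy.special import sici

# ln(sum x^n/n!) = ln(e^x) = x and ln(lim (1+x/n)^n) = x, so the
# sine's argument is x^2/x = x; the constant in front uses sum i = 5050.
assert sum(range(101)) == 5050

si_large, _ = sici(1e8)                     # Si(1e8) is essentially pi/2
print(4 * 5050 / (101 * np.pi) * si_large)  # ~ 100.0
```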
Are the two graphs isomorphic? Find an isomorphism if true.
Here is the short answer: $a$ can't correspond to $1$. The two vertices have different degrees (numbers of edges out of the vertex)!
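If you want to check the obstruction mechanically, here is a minimal Python sketch; the adjacency lists below are made up, so substitute the two graphs from your problem:

```python
# Made-up adjacency lists standing in for the two graphs in the question.
G1 = {'a': ['b', 'c', 'd'], 'b': ['a'], 'c': ['a'], 'd': ['a']}
G2 = {1: [2, 3], 2: [1, 3], 3: [1, 2], 4: []}

def degree_sequence(g):
    return sorted(len(nbrs) for nbrs in g.values())

# An isomorphism must preserve degrees, so unequal degree sequences
# (or a candidate pair like a -> 1 with different degrees) rule it out.
print(degree_sequence(G1), degree_sequence(G2))
print(degree_sequence(G1) == degree_sequence(G2))
```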
Change of Variables - independence of variables
Yes, I think this integration is OK. Try the following thought experiment. Suppose $x$ and $\eta$ are mutually independent (so far as you are concerned). Someone came along and created $y$ from $x$ and $\eta$. Now, $x$ and $y$ are not mutually independent. They then present you with the PDE of $u$ as a function of $x$ and $y$. You then unravel this back to $u$ as a function of $x$ and $\eta$. I think the problem starts with the assumption that $x$ and $y$ are mutually independent. I find this questionable, since if they were independent, there would not have been a PDE to start with; i.e., you would not have needed a similarity transform to get to the ODE. You found a similarity transform that reduced the PDE to an ODE. You can think of this as looking for an auxiliary variable $\eta$ such that $x$ and $\eta$ are independent with respect to $u$.
A Group of Length $1$ with a Subgroup of Length $m$
What about the "standard" embedding of $\;A_4\;$ in $\;A_5\;$? $$1\le V\lhd A_4\le A_5\;,\;\;V:=\{\,1\,,\,(12)(34)\,,\,(13)(24)\,,\,(14)(23)\,\}$$
Solving these two equations
We have $$1+\frac{1}{x^2+y^2}=\frac{12}{5x}$$ and $$1-\frac{1}{x^2+y^2}=\frac{4}{5y},$$ which gives $$\frac{6}{x}+\frac{2}{y}=5$$ or $$y=\frac{2x}{5x-6},$$ which after substitution into the first equation gives $$25x^4-120x^3+209x^2-156x+36=0$$ or $$(5x^2-12x+6.5)^2-2.5^2=0$$ or $$(x-2)(5x-2)(5x^2-12x+9)=0.$$ That is, we get the answer: $$\{(2,1),(0.4,-0.2)\}$$
Proof of the ring isomorphism: $ \mathbb{Z}[\sqrt{7}] /(5+2 \sqrt{7}) \cong \mathbb{Z} / (3) $
Consider the natural homomorphism $$\varphi \colon \mathbb{Z} \hookrightarrow \mathbb{Z}\left[ \sqrt{7} \right] \twoheadrightarrow \frac{\mathbb{Z}\left[ \sqrt{7} \right]}{\left\langle 5 +2 \sqrt{7} \right\rangle}$$ which sends any integer $n \in \mathbb{Z}$ to its class $n +\left\langle 5+ 2 \sqrt{7} \right\rangle \in \frac{\mathbb{Z}\left[ \sqrt{7} \right]}{\left\langle 5 +2 \sqrt{7} \right\rangle}$. As you said, it is enough to prove that $\text{Ker}(\varphi) = 3 \mathbb{Z}$ and $\varphi$ is surjective. We have $3 = -\left( 5 -2 \sqrt{7} \right) \left( 5 +2 \sqrt{7} \right) \in \left\langle 5 +2 \sqrt{7} \right\rangle$, and hence $3 \mathbb{Z} \subset \text{Ker}(\varphi)$. Therefore, either $\text{Ker}(\varphi) = \mathbb{Z}$ or $\text{Ker}(\varphi) = 3 \mathbb{Z}$. Now, if we had $\text{Ker}(\varphi) = \mathbb{Z}$, there would exist $a, b \in \mathbb{Z}$ such that $1 = \left( a +b \sqrt{7} \right) \left( 5 +2 \sqrt{7} \right)$, which would yield $5 a +14 b = 1$ and $2 a +5 b = 0$ since $\sqrt{7}$ is irrational. This is impossible: eliminating $a$ from the two equations gives $3 b = 2$, which has no integer solution. Thus, $\text{Ker}(\varphi) = 3 \mathbb{Z}$. Finally, note that $\sqrt{7} = 2 +\left( 8 -3 \sqrt{7} \right) \left( 5 +2 \sqrt{7} \right)$. Therefore, for all $a, b \in \mathbb{Z}$, we have $$a +b \sqrt{7} +\left\langle 5 +2 \sqrt{7} \right\rangle = \varphi(a +2 b) \, \text{,}$$ which proves that $\varphi$ is surjective. P.S.: In order to avoid possible confusion, I denote by $\left\langle 5 +2 \sqrt{7} \right\rangle$ the ideal of $\mathbb{Z}\left[ \sqrt{7} \right]$ generated by $5 +2 \sqrt{7}$.
Show that there is no non-constant random variable such that Chebyshev Inequality becomes an equality for all $x>0$
Yes. Equality is impossible when $ 0 < x < \sigma$ (there $\sigma^2/x^2 > 1 \geq P(|X-\mu|\ge x)$), so no distribution with $\sigma \ne 0$ can satisfy this for all $x > 0$. It's also worth noting that if we were to assume there is such a distribution with finite $\sigma$, it would have to satisfy $$ \frac{d}{dx}\left[\frac{\sigma^2}{x^2} \right]=-2\frac{\sigma^2}{x^3} = \frac{d}{dx} P(|X-\mu|\ge x) = \frac{d}{dx}\left[\int_{-\infty}^{\mu-x} p_X(t)dt + \int_{\mu+x}^\infty p_X(t)dt\right] =-p_X(\mu+x) - p_X(\mu - x), $$ which implies $$ \frac{p_X(\mu+x) + p_X(\mu - x)}{2} = \frac{\sigma^2}{x^3} $$ And any distribution that falls off as $x^{-3}$ has $\sigma = \infty$. So it definitely isn't going to work.
Does equality of the sum of two such series imply equality of each term of that series?
One way to tackle this is to do induction on $m$. Assume that $m=1$, so that $$|a(1)-x|=\sum_{j=1}^n|b(j)-x|$$ for all $x\in\mathbb R$. Thus by the (reverse) triangle inequality, $$\sum_{j=2}^n|b(j)-x|=|a(1)-x|-|b(1)-x|\le ||a(1)-x|-|b(1)-x||$$ $$\le |(a(1)-x)-(b(1)-x)|=|a(1)-b(1)|.$$ Suppose for a contradiction that $n>1$. Then by choosing $x=b(n)+|a(1)-b(1)|+1$, we obtain a contradiction, since the far left becomes strictly bigger than the far right (the $j=n$ term alone equals $|a(1)-b(1)|+1$). Can you continue the inductive argument?
Help with a troublesome double integral
The integral to evaluate is $$I(s):=-2i\int_{0}^{\infty}\int_{0}^{\infty}\frac{\cos{\left(t\log{(1-ix)}\right)}-\cos{\left(t\log{(1+ix)}\right)}}{t\left(e^{2\pi x}-1\right)\left(e^{2\pi t/s}-1\right)}\mathrm{d}x\mathrm{d}t,$$ where $s\in\mathbb{C}$ is a complex parameter. Right away we notice that the integration with respect to the variable $x$ is much more formidable than that with respect to $t$, so the first thing we do is interchange the order of integration: $$\begin{align}I(s)&=-2i\int_{0}^{\infty}\int_{0}^{\infty}\frac{\cos{\left(t\log{(1-ix)}\right)}-\cos{\left(t\log{(1+ix)}\right)}}{t\left(e^{2\pi x}-1\right)\left(e^{2\pi t/s}-1\right)}\mathrm{d}t\mathrm{d}x\\ &=-2i\int_{0}^{\infty}\frac{\mathrm{d}x}{\left(e^{2\pi x}-1\right)}\int_{0}^{\infty}\frac{\cos{\left(t\log{(1-ix)}\right)}-\cos{\left(t\log{(1+ix)}\right)}}{t\left(e^{2\pi t/s}-1\right)}\mathrm{d}t.\end{align}$$ The integration w.r.t. $t$ is still too complicated to evaluate, but notice that the term in the numerator of the integrand involving a difference of cosine functions is highly suggestive of the Fundamental Theorem of Calculus. Indeed, using the simple identity $\int_{a}^{b}\sin{(t\,\omega)}\,\mathrm{d}\omega=\frac{\cos{(t\,a)}-\cos{(t\,b)}}{t}$ we can write, $$\int_{\log{(1-ix)}}^{\log{(1+ix)}}\sin{(t\,\omega)}\,\mathrm{d}\omega=\frac{\cos{(t\,\log{(1-ix)})}-\cos{(t\,\log{(1+ix)})}}{t}.$$ Substituting this expression back into the integral over $t$ not only absorbs the problematic factor of $t$ in the denominator of the integrand, it also gives us the option of performing another change-of-order-of-integration magic trick. $$\begin{align}I(s)&=-2i\int_{0}^{\infty}\frac{\mathrm{d}x}{\left(e^{2\pi x}-1\right)}\int_{0}^{\infty}\frac{\mathrm{d}t}{\left(e^{2\pi t/s}-1\right)}\int_{\log{(1-ix)}}^{\log{(1+ix)}}\sin{(t\,\omega)}\,\mathrm{d}\omega\\ &=-2i\int_{0}^{\infty}\frac{\mathrm{d}x}{\left(e^{2\pi x}-1\right)}\int_{0}^{\infty}\int_{\log{(1-ix)}}^{\log{(1+ix)}}\frac{\sin{(t\,\omega)}}{\left(e^{2\pi t/s}-1\right)}\,\mathrm{d}\omega\mathrm{d}t\\ &=-2i\int_{0}^{\infty}\frac{\mathrm{d}x}{\left(e^{2\pi x}-1\right)}\int_{\log{(1-ix)}}^{\log{(1+ix)}}\int_{0}^{\infty}\frac{\sin{(t\,\omega)}}{\left(e^{2\pi t/s}-1\right)}\,\mathrm{d}t\mathrm{d}\omega\end{align}$$ So we turn our attention to solving the integral $\int_{0}^{\infty}\frac{\sin{(t\,\omega)}}{e^{2\pi t/s}-1}\mathrm{d}t$. This integral (see notes at bottom) is, $$\int_{0}^{\infty}\frac{\sin{(t\,\omega)}}{e^{2\pi t/s}-1}\mathrm{d}t=\frac{1}{2\omega}\left(\frac{s\omega}{2}\coth{\left(\frac{s\omega}{2}\right)}-1\right),$$ and the integral becomes, $$\begin{align}I(s)&=-2i\int_{0}^{\infty}\frac{\mathrm{d}x}{\left(e^{2\pi x}-1\right)}\int_{\log{(1-ix)}}^{\log{(1+ix)}}\left(\frac{s\omega}{2}\coth{\left(\frac{s\omega}{2}\right)}-1\right)\frac{\mathrm{d}\omega}{2\omega}\\ &=-i\int_{0}^{\infty}\frac{\mathrm{d}x}{\left(e^{2\pi x}-1\right)} \int_{\log{(1-ix)}}^{\log{(1+ix)}} \left(\frac{s\omega}{2}\coth{\left(\frac{s\omega}{2}\right)}-1\right)\frac{\mathrm{d}\omega}{\omega}\\ &=-i\int_{0}^{\infty}\frac{\mathrm{d}x}{\left(e^{2\pi x}-1\right)} G(x,s).\end{align}$$ See appendix 2 for details on the function $G(x,s)$. 
It is seen to have the form $G(x,s)=f(ix)-f(-ix)$, and so we can apply the Abel-Plana formula to the final integral $I(s)$: $$\begin{align} I(s)&=-i\int_{0}^{\infty}\frac{f(ix)-f(-ix)}{\left(e^{2\pi x}-1\right)}\mathrm{d}x\\ &=\int_{0}^{\infty}f(x)\,\mathrm{d}x+\frac12f(0)-\sum_{n=0}^{\infty}f(n) \end{align}$$ Appendix 1: $\tau=\frac{2\pi t}{s}$, $t=\frac{s}{2\pi}\tau$, $\alpha:=\frac{s\omega}{2\pi}$ $$\begin{align}\int_{0}^{\infty}\frac{\sin{(t\,\omega)}}{e^{2\pi t/s}-1}\mathrm{d}t&=\frac{s}{2\pi}\int_{0}^{\infty}\frac{\sin{(\omega\frac{s}{2\pi}\tau)}}{e^{\tau}-1}\mathrm{d}\tau\\ &=\frac{s}{2\pi}\int_{0}^{\infty}\frac{\sin{(\alpha\,\tau)}}{e^{\tau}-1}\mathrm{d}\tau\\ &=\frac{s}{2\pi}\int_{0}^{\infty}\frac{\sin{(\alpha\,\tau)}\,e^{-\tau}}{1-e^{-\tau}}\mathrm{d}\tau\\ &=\frac{s}{2\pi}\int_{0}^{\infty}\sin{(\alpha\,\tau)}\,e^{-\tau}\sum_{n=0}^{\infty}e^{-n\tau}\mathrm{d}\tau\\ &=\frac{s}{2\pi}\sum_{n=0}^{\infty}\int_{0}^{\infty}\sin{(\alpha\,\tau)}\,e^{-(n+1)\tau}\mathrm{d}\tau\\ &=\frac{s}{2\pi}\sum_{n=0}^{\infty}\frac{\alpha}{\alpha^2+(n+1)^2}\\ &=\frac{s}{2\pi}\frac{\pi\alpha\coth{(\pi\alpha)}-1}{2\alpha}\\ &=\frac{1}{2\omega}\left(\frac{s\omega}{2}\coth{\left(\frac{s\omega}{2}\right)}-1\right)\end{align}$$ Appendix 2: $$\int\left(\frac{s\omega}{2}\coth{\left(\frac{s\omega}{2}\right)}-1\right)\frac{\mathrm{d}\omega}{\omega}=\log{\left(\sinh{\left(\frac{s\omega}{2}\right)}\right)}-\log{\omega}+\text{constant}$$ $$\begin{align} G(x,s):&=\int_{\log{(1-ix)}}^{\log{(1+ix)}}\left(\frac{s\omega}{2}\coth{\left(\frac{s\omega}{2}\right)}-1\right)\frac{\mathrm{d}\omega}{\omega}\\ &=\left(\log{\left(\sinh{\left(\frac{s\log{(1+ix)}}{2}\right)}\right)}-\log{\log{(1+ix)}}\right)-\left(\log{\left(\sinh{\left(\frac{s\log{(1-ix)}}{2}\right)}\right)}-\log{\log{(1-ix)}}\right)\\ &=f(ix)-f(-ix), \end{align}$$ where $f(z):=\left(\log{\left(\sinh{\left(\frac{s\log{(1+z)}}{2}\right)}\right)}-\log{\log{(1+z)}}\right)$. In response to OP's edit #2: For $\Re{(\gamma)}>|\Re{(\beta)}|$, $$\int_{0}^{\infty}\frac{\cos{\left(\alpha\,t\right)}\sinh{\left(\beta\,t\right)}}{e^{\gamma\,t}-1}\,\mathrm{d}t = \frac{\beta}{2\left(\alpha^2+\beta^2\right)}-\frac{\pi}{2\gamma}\cdot\frac{\sin{\left(\frac{2\pi\beta}{\gamma}\right)}}{\cosh{\left(\frac{2\pi\alpha}{\gamma}\right)}-\cos{\left(\frac{2\pi\beta}{\gamma}\right)}}.$$ The above integral is formula $4.132.4$ of Gradshteyn and Ryzhik's Table of Integrals, Series, and Products.
Pigeonhole Principle - Floor versus Ceiling
IMHO the best statement of the pigeonhole principle is... neither of these. My reason for saying this is that both of them encourage mindless application of formulae without thinking about what they actually signify. (Though it is clear from your question that you have not fallen into this trap :) Here is, IMO, the best formulation of the (generalised) PHP: if $n$ pigeonholes contain, altogether, more than $kn$ pigeons, then there is a pigeonhole which contains more than $k$ pigeons. Addendum. Your professor and classmates won't find a counterexample, because the two formulae do always give the same result. To see this, write $m=qn+r$ with $0\le r<n$. It is easy to calculate that if $r=0$ then both expressions are $q$, while if $r>0$ then both expressions are $q+1$.
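For the skeptics, here is a quick exhaustive Python check of the addendum, assuming the two competing formulae are the usual $\lceil m/n\rceil$ and $\lfloor (m-1)/n\rfloor+1$:

```python
import math

# Assuming the two formulae in question are ceil(m/n) and floor((m-1)/n)+1,
# this confirms the addendum: they agree for all m, n >= 1 in this range.
for n in range(1, 200):
    for m in range(1, 200):
        assert math.ceil(m / n) == (m - 1) // n + 1
print("both formulae agree on 1 <= m, n < 200")
```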
which of the following is impossible?
They're all impossible. The points $A,D$ are on the line $αx + βy = 10$. The points $B,C$ are in the open half-plane $αx + βy < 10$. Since $A,B,C,D,E$ are ordered counterclockwise, it follows that $E$ is in the open half-plane $αx + βy > 10$.
Limiting behavior of this sequence of integers
Let $$\Omega(n) = \sum_{p^k \| n} k$$ (the number of prime factors of $n$ counted with multiplicity). Then $$ \beta(N)= \sum_{n \le N} \Omega(n) = \Omega(N!)= \sum_{p^k \le N} \lfloor N/p^k \rfloor= N\sum_{p^k \le N} \frac{1}{p^k}+\mathcal{O}(N) = N \log \log N+\mathcal{O}(N) $$ by Mertens' second theorem.
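A quick numerical illustration in Python (sympy's `factorint` supplies $\Omega$; note the $\mathcal{O}(N)$ term is still substantial at this scale):

```python
import math
from sympy import factorint

def Omega(n):
    # number of prime factors of n counted with multiplicity
    return sum(factorint(n).values())

N = 10**4
beta = sum(Omega(n) for n in range(2, N + 1))
# Mertens' second theorem predicts beta(N) = N log log N + O(N);
# the O(N) correction is visible in the gap between these two numbers.
print(beta, N * math.log(math.log(N)))
```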
Simplex: conflicting constraints in linear program
We know that $s \geq 0$ and $t \geq 0$, hence $-s-t \leq 0$. But the first row of the tableau says $-s-t = 5$, which contradicts $-s-t \leq 0$. Also, note that all basic variables should be nonnegative; here $s=-5$.
Partial $x$-derivatives of $\frac{xy}{x^2+y^2}$
When taking a limit $\lim_{t \to 0}$, $t$ is not zero inside the limit. If $(x,y) = (0,0)$, then (with $t \neq 0$) $f(t,0) = 0$. If $(x,y) \neq (0,0)$, then (as long as $(x+t,y) \neq (0,0)$, which can fail for at most one nonzero value of $t$) $f(x+t,y) = {(x+t)y \over (x+t)^2+y^2 }$. It is clear that $f$ is smooth at any $(x,y) \neq (0,0)$. This is because multiplication, addition and division (with nonzero divisor, of course) are smooth. In particular, ${\partial f(x,y) \over \partial x} = {y (y^2-x^2) \over (x^2+y^2)^2 }$ there. Since $f(x,0) = 0$ for all $x$, it is clear that ${\partial f(x,0) \over \partial x} = 0$ for all $x$.
Prove a result on the size of the minimal set that generates a finite abelian group
A clean way to do it consists perhaps in noting that if $p$ is a prime dividing $m_{1}$, then $G$ has a quotient group $Q$ isomorphic to $\mathbb{Z}_{p}^{t} = \mathbb{Z}_{p} \oplus \dots \oplus \mathbb{Z}_{p}$. If in $$G \cong \mathbb{Z}_{m_1} \oplus\cdots\oplus \mathbb{Z}_{m_t}$$ a generator of the $\mathbb{Z}_{m_{i}}$ summand is $a_{i}$, then $$ Q = G / \langle p a_{1}, \dots , p a_{t} \rangle. $$ In fact $$ Q = \langle a_{1}, \dots , a_{t} \rangle / \langle p a_{1}, \dots , p a_{t} \rangle \cong \bigoplus_{i=1}^{t} \langle a_{i} \rangle / \langle p a_{i} \rangle \cong \mathbb{Z}_{p}^{t}, $$ as the order of each $a_{i}$ is a multiple of $p$. Even without appealing to the theory of vector spaces (see the comment by OP below), a subgroup of $Q$ generated by $k$ elements is easily seen to have at most $p^{k}$ elements, so $Q$ cannot be generated by fewer than $t$ elements. Thus $G$, too, cannot be generated by fewer than $t$ elements, as any set of generators for $G$ will induce a set of generators for $Q$. Now, as OP already noted, the isomorphism $$G \cong \mathbb{Z}_{m_1} \oplus\cdots\oplus \mathbb{Z}_{m_t}$$ shows that $G$ does indeed have a set of generators of size $t$.
How do I tune an IMC(Internal Model Control) - controller?
I am not familiar with IMC, but by just evaluating the block diagram it can be shown that the transfer function from $w$ to $y$ is equal to $$ G_{wy}(s) = \frac{Q(s)\,G(s)}{1 + Q(s)\left(G(s) - G_m(s)\right)}. \tag{1} $$ If $G(s)$, $G_m(s)$ and $G_{wy}(s)$ are given, then you could directly solve for $Q(s)$ such that $(1)$ holds. Doing so yields the following expression $$ Q(s) = \frac{G_{wy}(s)}{G_{wy}(s) \left(G_m(s) - G(s)\right) + G(s)}. \tag{2} $$ So $$ G(s) = \frac{20}{s^2 + 4\,s + 40}, \quad G_m(s) = \frac{39}{s^2 + 5.3\,s + 38}, \quad G_{wy}(s) = \frac{25}{s^2 + 4\,s + 25} $$ yields $$ Q(s) = \frac{5\,s^4 + 46.5\,s^3 + 496\,s^2 + 1820\,s + 7600}{4\,s^4 + 37.2\,s^3 + 431.8\,s^2 + 1388\,s + 7800}. $$ However, I do not know how robust this controller will be to changes in $G(s)$.
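Since $(2)$ was derived by hand, here is a short sympy sketch that recomputes it and checks that closing the loop really returns $G_{wy}(s)$:

```python
import sympy as sp

s = sp.symbols('s')
G   = 20 / (s**2 + 4*s + 40)
Gm  = 39 / (s**2 + sp.Rational(53, 10)*s + 38)
Gwy = 25 / (s**2 + 4*s + 25)

# Solve (1) for Q(s) as in (2); this should match the Q(s) above
# up to a common factor in numerator and denominator.
Q = sp.cancel(Gwy / (Gwy * (Gm - G) + G))
print(Q)

# Sanity check: plugging Q back into (1) reproduces Gwy.
closed = sp.cancel(Q * G / (1 + Q * (G - Gm)))
print(sp.simplify(closed - Gwy))   # 0
```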
Multivariable Calculus, rate of change.
Let $\vec{v}=\langle4,2\rangle$. Then a vector orthogonal to $\vec{v}$ can be found by interchanging the components and changing one of the signs, so we can take $\vec{t}=\langle2,-4\rangle$ to get a tangent vector corresponding to the clockwise direction; and normalizing this vector gives the unit tangent vector $\vec{u}=\langle\frac{1}{\sqrt{5}},-\frac{2}{\sqrt{5}}\rangle$. Now find the directional derivative of T in the direction of $\vec{u}$ using $\;\;\;D_{\vec{u}}T(1,1)=\vec{\nabla T}(1,1)\cdot\vec{u}$
Is there a quicker way to show that a set of vectors is a spanning set?
The determinant test is worthwhile. Let $M$ be a square matrix whose column vectors are from $S$ (so this test needs exactly $\dim(V)$ vectors). Those vectors form a basis if and only if $\det(M) \neq 0$. I figure I'll edit my answer so nobody has to dig through all the comments (and in light of the edit on the OP). A basis is a maximal linearly independent set. It also spans the space. So if $|S| > \dim(V)$, then to show $S$ spans $V$ it is necessary to find a basis among its vectors. There are $\binom{|S|}{\dim(V)}$ basis candidates to choose from $S$. The general way to construct a basis is to start with an empty set and add in vectors. If the vector you are examining makes the set linearly dependent, throw it out. The multiples test is one test: if you see $v = kx$ for some $x$ in your set and $k \in \mathbb{F}$, then $v$ and $x$ are linearly dependent. You could also row-reduce to determine linear (in)dependence. Once you can construct a square matrix, you can use the determinant test.
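Here is a small numpy illustration of the rank and determinant tests; the vectors are made up for the example:

```python
import numpy as np

# Hypothetical example: do these four vectors span R^3?
S = [np.array([1.0, 0.0, 2.0]),
     np.array([0.0, 1.0, 1.0]),
     np.array([1.0, 1.0, 0.0]),
     np.array([2.0, 1.0, 5.0])]

M = np.column_stack(S)            # 3 x 4: more vectors than dim(V)
print(np.linalg.matrix_rank(M))   # 3, so S spans R^3

# After whittling S down to a square candidate basis, use the determinant:
B = np.column_stack(S[:3])
print(np.linalg.det(B))           # -3.0, nonzero <=> the three form a basis
```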
Calculating Entropy and Information Gain of a Variable
$$\mathsf H(Y\mid X) = - \sum_x p_X(x) \sum_y p_{Y\mid X}(y\mid x)\log p_{Y\mid X}(y\mid x)$$ or $$\mathsf H(Y\mid X) ~=~ \sum_{x,y} p_{X,Y}(x,y)\log \frac{p_{X}(x)}{p_{X,Y}(x,y)}$$ Hint: the latter will be a sum of six terms.
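For concreteness, here is a Python sketch evaluating both forms on a made-up $2\times3$ joint table (so the second form has six terms), using base-2 logs:

```python
import math

# Hypothetical joint pmf p(x, y); substitute the table from your problem.
p = {('a', 0): 0.10, ('a', 1): 0.20, ('a', 2): 0.10,
     ('b', 0): 0.25, ('b', 1): 0.05, ('b', 2): 0.30}

px = {}
for (x, y), pxy in p.items():
    px[x] = px.get(x, 0.0) + pxy            # marginal p(x)

# First form: -sum over (x,y) of p(x,y) log p(y|x), with p(y|x) = p(x,y)/p(x)
H1 = -sum(pxy * math.log2(pxy / px[x]) for (x, y), pxy in p.items())
# Second form: sum over (x,y) of p(x,y) log( p(x) / p(x,y) ), six terms here
H2 = sum(pxy * math.log2(px[x] / pxy) for (x, y), pxy in p.items())
print(H1, H2)                               # the two values agree
```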
Generators of the intersection and addition of two ideals generated by positive integers $m,n$ in the ring of integers
Hint: $x\in A$ means $x$ is a multiple of $m$, and $x\in B$ means $x$ is a multiple of $n$, so $x\in A\cap B$ means $x$ is a common multiple of $m$ and $n$.
Understand the free group universal property applied to $D_n$
You're asked to show that for every $(a,b) \in (\mathbb{Z}/n)^2$, there is a group hom $f_{(a,b)} : D_{n} \to D_{n}$ defined by $f_{(a,b)}(r) = r^a$ $f_{(a,b)}(s) = r^b s$ Alright, how can we do this? The given presentation $D_n = \langle r,s ~|~ r^n = s^2 = srsr = 1 \rangle$ says $D_n$ is a quotient of $\langle r, s \rangle$ (the free group with generators $r$ and $s$), and the kernel $N$ is generated by the relations shown. Using the universal property of the free group, we know $f_{(a,b)}$ as defined above is a perfectly good function from $\langle r, s \rangle \to D_n$. We said what it does to generators, easy. But we don't care about free groups, we care about $D_n$! To show that $f_{(a,b)} : \langle r, s \rangle \to D_n$ descends to a map $D_n \to D_n$, we use the universal property of the quotient group. Since $D_n = \langle r, s \rangle / N$, all we need to do is show that $N \subseteq Ker f_{(a,b)}$. The last piece of the puzzle is to remember $N$ is the smallest normal subgroup such that $r^n, s^2, srsr \in N$. So any normal subgroup $K$ containing $r^n$, $s^2$, and $srsr$ must also contain $N$. Thus, it suffices to show that $r^n, s^2, srsr \in Ker f_{(a,b)}$. $f_{(a,b)}(r^n) = (r^a)^n = (r^n)^a = 1^a = 1$ $f_{(a,b)}(s^2) = (r^b s)(r^b s) = (r^b)(s r^b s) = r^b r^{-b} = 1$ $f_{(a,b)}(srsr) = (r^b s)(r^a)(r^b s)(r^a) = s r^{-b} r^a r^b r^{-a} s = s1s = 1$ So we see $N \subseteq Ker f_{(a,b)}$ for each $(a,b)$. The claim follows. I hope this helped ^_^
Solving a congruence system (which has other solutions)
Since elimination applied to the two equations results in $44y \equiv 0 \pmod {11}$, which is true for all integers $y$, the two equations are not independent. As a result, we must express the values of $x$ in terms of $y$ using one of the equations. For example: $$3x+7y\equiv 5 \pmod {11} \implies 12x + 28y \equiv 20 \pmod {11} \implies x\equiv 9+5y\pmod {11}$$ To verify that $(9+5y,y)$ gives all solutions, we substitute this into the second equation, and we have: $$8x+4y \equiv 72+44y\equiv 6\pmod {11},$$ so both equations are satisfied. $(9,0), (0,7), (7,4)$ are particular solutions, modulo $11$.
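A brute-force Python check over $\mathbb{Z}/11$, confirming there are exactly the $11$ solutions $x\equiv 9+5y$:

```python
# Enumerate all residue pairs and compare with the parametrized family.
sols = [(x, y) for x in range(11) for y in range(11)
        if (3*x + 7*y) % 11 == 5 and (8*x + 4*y) % 11 == 6]
assert sols == sorted(((9 + 5*y) % 11, y) for y in range(11))
print(len(sols))        # 11 solutions, including (9,0), (0,7), (7,4)
```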
Which of the following statements are true on countable sets
Both statements are false, actually. For (1): You've already shown that every such series converges; it's not hard to show that any two series which differ at some point (e.g. $a_n\not=b_n$ for some $n$) converge to different reals. Thus, you're really just counting the number of sequences of possible $a_i$s. Using this, can you construct a bijection between the set of reals which are representable by such a series, and some set you already know is uncountable? (Or, apply a diagonal argument directly.) For (2): Can you think of two uncountable sets whose intersection is "very small"?
Explain that two parametrizations represent the same curve
Some hints: (1) the substitution (e.g. $u=t^5$) must be a diffeomorphism; (2) the start point and end point (if I understand your question) depend at least on the orientation of the two curves.
how do i find the distance between a vector and a line or plane
How it ends up being three: The first is a 2-D problem, so it should be very easy to visualise. I encourage you to draw a diagram first and it'll all be clear. The line spanned by $\vec v = (1,0)$ is the $x$-axis in the standard Cartesian coordinate system. The vector $\vec y = (2,3)$ starts at the origin and ends at the point $(2,3)$. Now, the definition of distance in this case, treating $y$ as a point, is the shortest distance from the point to the line. What is the shortest distance? It is indeed the perpendicular one. Therefore, you need to find the perpendicular distance from the point $(2,3)$ to the $x$-axis. Can you now see why it is $3$? (Think of the length of the perpendicular segment dropped from the point to the axis.) As for the second question, here's a hint. In general, the distance in Euclidean space between two vectors $ \vec u = \left< u_1, \cdots , u_n \right>$ and $\vec v = \left< v_1, \cdots , v_n \right>$ is given by $$d(\vec{u}, \vec{v}) = \| \vec{u} - \vec{v} \| = \sqrt{(u_1 - v_1)^2 + (u_2 - v_2)^2 + \cdots + (u_n - v_n)^2}$$ The distance between a point $P$ in space and the line $L$ with parametric equation $\vec r(t) = \vec {OQ} + t\vec u$ is given by $$d(P,L) = \frac{| \vec {PQ} \times \vec u|}{|\vec u|}$$ I've presented both formulas because the question is phrased unconventionally, at least for me. If you have to find the distance between the line spanned by the vector $\vec v = (4,5,6)$ and the vector $\vec u = (1,2,3)$, then you'd have to take $\vec u$ as a position vector. That means, in the context of the problem, you can treat it as the point $P$ from the formula. Next, you'll have to find the equation, preferably in parametric form, of the line spanned by $\vec v$. To do that, you can simply take $(4,5,6)$ as the point $Q$ on the line. Subtract the $0$ vector to get $\vec {OQ}$. Then take any non-zero multiple of the vector $\vec v = (4,5,6)$ and that'll be your direction vector $\vec u$. That is enough information to come up with the parametric equation of the line. After that, you'll have all the information you need to use the formula for the distance between a line and a point in space.
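Both computations in a short numpy sketch (the variable names are mine, not from the problem):

```python
import numpy as np

# Part 1: distance from y = (2,3) to the line spanned by v = (1,0).
y = np.array([2.0, 3.0]); v = np.array([1.0, 0.0])
proj = (y @ v) / (v @ v) * v                    # orthogonal projection onto v
print(np.linalg.norm(y - proj))                 # 3.0

# Part 2: distance from the point (1,2,3) to the line spanned by (4,5,6),
# using d(P, L) = |PQ x u| / |u| with Q = origin (the line passes through 0).
P = np.array([1.0, 2.0, 3.0]); u_dir = np.array([4.0, 5.0, 6.0])
PQ = np.zeros(3) - P
print(np.linalg.norm(np.cross(PQ, u_dir)) / np.linalg.norm(u_dir))
```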
Methods of determining if a non-primitive polynomial is irreducible in a ring
Eisenstein's criterion works for proving that non-primitive polynomials are irreducible in $\mathbb{Q}[x]$. One then might hope to use Gauss's Lemma to prove that a polynomial with integer coefficients which is irreducible in $\mathbb{Q}[x]$ is also irreducible in $\mathbb{Z}[x]$, which then requires primitivity. So, because $2$ doesn't divide $1$ (coefficient of $x^4$) $2$ divides $0$ (coefficient of $x^3$) $2$ divides $0$ (coefficient of $x^2$) $2$ divides $2$ (coefficient of $x$) $2$ divides $2$ (coefficient of $1$) $2^2=4$ doesn't divide $2$ (coefficient of $1$) by Eisenstein's criterion the polynomial $x^4+2x+2$ is irreducible in $\mathbb{Q}[x]$. In fact, the polynomial $x^4+2x+2$ is primitive, since $\gcd(1,0,0,2,2)=1$, so the polynomial is irreducible in $\mathbb{Z}[x]$ too.
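If you want to double-check both claims with sympy:

```python
import sympy as sp
from sympy.abc import x

p = sp.Poly(x**4 + 2*x + 2, x)
print(sp.gcd_list(p.all_coeffs()))   # 1: content is 1, so p is primitive
print(p.is_irreducible)              # True: irreducible (so in Z[x] too)
```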
Correct non-inductive proof of Euler's Formula $|V|+|F|-|E|=2$?
Your proof is not a non-inductive proof. The steps where you construct $G$ from a smaller graph by adding edges, and check that $|V|-|E|+|F|=2$ still holds because the left-hand side does not change, are the inductive steps of your proof. This is fine! Induction is great. But you are falling into the classic "induction trap" that everyone falls into when writing induction proofs about graphs. Whenever you write an inductive step that "goes up", starting with a graph for which your result holds and making it bigger, you set yourself the goal of showing that every graph can be obtained by growing your base case in this way. You are not careful about this: In the cyclic case, you write "Clearly, any planar graph with a cyclic subgraph can be constructed by adding more points and edges to $C_3$". This should set off alarm bells. Whenever you are justifying an intuitively clear statement with "clearly", you haven't written a proof. In the acyclic case, you haven't even stated the claim that your proof needs to work: that every tree can be built by repeatedly adding leaves to a line-graph. (Do you mean a path graph? A line graph is something else.) We can avoid the problem of proving these hard-to-prove statements by writing an inductive step that "goes down" instead. Here, we assume that the result holds for all small graphs, and take a slightly larger graph; then, delete a vertex or edge from it, reducing it to a graph that your inductive hypothesis applies to. For example, when proving Euler's formula for acyclic graphs, you could first argue (in your favorite way) that any such graph contains a vertex of degree $1$, then delete that vertex together with the edge out of it. This reduces the graph to a smaller one for which you have already verified Euler's formula. Here, there is no worry that we've gotten all of the graphs we're considering in this case, because we started with an arbitrary such graph. Aside from this, your strategy looks good. You should just rethink your arguments in terms of removing an edge or vertex, rather than adding them.
Is there a finite commutative semigroup $S$ with $S^2 = S$ which is not a monoid?
The two minimal examples are the semigroup $S = \{a, b\}$ defined by $aa = ba = a$, $ab = bb = b$ and its dual $\tilde S$, defined on the same support by $aa = ab = a$, $ba = bb = b$.
Show that $\lim_{n \rightarrow \infty} \prod_{k=1}^{n} (1-e^{-ka})$ exists and is positive
I assume your question is about a product of the form $\displaystyle \lim_{n \rightarrow \infty}\prod_{k=1}^n (1 - e^{-ka})$. Any product of the form $\displaystyle \lim_{n \rightarrow \infty}\prod_{k=1}^n (1 - a_k)$ with $0 \le a_k < 1$ converges to a positive limit iff $\displaystyle \lim_{n \rightarrow \infty}\sum_{k=1}^n a_k$ converges. In your case, $a_k = e^{-ka}$. $\displaystyle \lim_{n \rightarrow \infty}\sum_{k=1}^n a_k = \lim_{n \rightarrow \infty}\sum_{k=1}^n e^{-ka}$ converges to $\frac{1}{e^a-1}$ $(\text{geometric series with }e^{-a}<1\text{ as }a>0)$. Hence, the infinite product $\displaystyle \lim_{n \rightarrow \infty}\prod_{k=1}^n (1 - e^{-ka})$ converges to a positive limit. EDIT The equivalence of the convergence of $\displaystyle \prod_{n=1}^{\infty} (1 + a_n)$ and $\displaystyle \sum_{n=1}^{\infty} a_n$. Assume that $\displaystyle \sum_{n=1}^{\infty} a_n$ converges. This implies $\displaystyle \lim_{n \rightarrow \infty} a_n = 0$. Hence, $\displaystyle \lim_{n \rightarrow \infty} \frac{\log(1+a_n)}{a_n} = 1$. Now consider $b_n = \log(1+a_n)$. By the limit comparison test, since $0<\frac{b_n}{a_n}<\infty$ in the limit, we have that $\displaystyle \sum_{n=1}^{\infty} b_n$ converges. (Intuitively, what the limit comparison test means is that the tail sums differ only by a factor, and hence the convergence of one implies the other.) Hence, $\displaystyle \sum_{n=1}^{\infty} \log(1+a_n)$ converges, which essentially means $\displaystyle \prod_{n=1}^{\infty} (1+a_n)$ converges. Similarly, you can argue that if $\displaystyle \prod_{n=1}^{\infty} (1+a_n)$ converges, then $\displaystyle \sum_{n=1}^{\infty} a_n$ converges.
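A quick numerical look at the case $a=1$ in Python, showing the partial products settling down to a positive limit:

```python
import math

def partial_product(a, n):
    # computes prod_{k=1}^{n} (1 - e^{-k a})
    prod = 1.0
    for k in range(1, n + 1):
        prod *= 1.0 - math.exp(-k * a)
    return prod

# The tail factors approach 1 geometrically, so convergence is fast.
for n in (5, 10, 20, 40):
    print(n, partial_product(1.0, n))
```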
Conformal branched cover from the hyperbolic plane to the euclidean plane
Yes, there is. Here is a geometric construction (which could easily be visualized if one wants to do so): Let $S$ be a square (i.e., a quadrilateral with 4 equal sides and 4 equal angles) in the hyperbolic plane with angles of $\pi/4$, and let $f:S \to [0,1]^2$ be the conformal map of $S$ to the unit square, mapping vertices to vertices. Now the map $f$ can be extended by reflection to a map of the hyperbolic plane onto the Euclidean plane. The resulting map will have branch points of degree two over all points in the integer lattice, since it is angle-doubling at those points.
Using Lebesgue's dominated convergence theorem to show a function is continuous.
Because $u$ is integrable (and thus measurable), the product $x \mapsto \cos(xt) u(x)$ of two measurable functions is measurable. Let $(t_n)$ be a sequence such that $t_n \to t_0$. Now we can consider the sequence of measurable functions $f_n(x) = \cos(x t_n) u(x)$. Now $$ \lim_{n \to \infty} U(t_n) = \lim_{n \to \infty} \int u(x) \cos (x t_n) \, dx = \lim_{n \to \infty} \int f_n(x) \, dx\,. $$ Because $|f_n(x)| \leq |u(x)|$ for all $x$, $|u|$ is integrable, and $\cos$ is continuous (so $f_n(x) \to \cos(x t_0)u(x)$ pointwise), the dominated convergence theorem says that $$ \lim_{n \to \infty} \int f_n(x) \, dx\ = \int \lim_{n \to \infty} f_n(x) \, dx = \int u(x) \cos (x t_0) \, dx = U(t_0)\,. $$ Thus $$ \lim_{n \to \infty} U(t_n) = U(t_0)\,, $$ which means that $U$ is continuous.
Proof of upper triangular matrices
By definition, $$AB=\left(\sum_{k=1}^na_{ik}b_{kj}\right)\;,\;\;\text{and for}\;\;i> j\;\;\;\sum_{k=1}^n a_{ik}b_{kj}=0\;,\;\;\text{since in any case}$$ $\;i>k\;\;\text{or}\;\;k>j\;$, and thus every term of the sum vanishes. (2)-(3) are, imo, even easier: $\;\lambda A=\left(\lambda a_{ij}\right)\;$, and clearly $\;\lambda a_{ij}=0\;$ for $\;i>j\;$. You try now $\;A+B=\left(a_{ij}+b_{ij}\right)\;$
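A small numpy spot check of (1)-(3):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.triu(rng.normal(size=(5, 5)))   # random upper triangular matrices
B = np.triu(rng.normal(size=(5, 5)))

# Products, scalar multiples and sums of upper triangular matrices
# stay upper triangular, matching (1)-(3).
for M in (A @ B, 3.0 * A, A + B):
    assert np.allclose(M, np.triu(M))
print("all upper triangular")
```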
Can this quadratic formula of inner products be simplified
If $x_c, x_i, r_c, n \in \mathbb{R}^N$, then $$\langle x_c - x_i , n \rangle \pm \sqrt{ \langle n, x_c - x_i \rangle^2 + \langle r_c , r_c \rangle - \langle x_c - x_i , x_c - x_i \rangle }$$ describes the distance between $x_i$ and the point where a line starting at $x_i$, direction unit vector $n$, intersects a ($N-1$)-sphere centered at $x_c$ with radius $r_c$ . It is much easier to recognize, if you switch to traditional vector algebra notation, $$\left( \vec{x}_c - \vec{x}_i \right) \cdot \hat{n} \pm \sqrt{ \left ( \hat{n} \cdot \left ( \vec{x}_c - \vec{x}_i \right ) \right )^2 + r_c^2 - \left ( \vec{x}_c - \vec{x}_i \right ) \cdot \left ( \vec{x}_c - \vec{x}_i \right ) }$$ Although ordinarily $r_c$ is a scalar, it can just as well be a vector, above. (I.e., any vector from the center of the sphere to any point on its surface.)
Let $f_n: D \rightarrow \mathbb{R}: f_n(x) = g(x)^n, n≥1$. Necessary and sufficient conditions such that $f_n$ converges?
Your conditions on $g$ are also necessary, since if $|g(x)|>1$ for some $x\in[a,b]$ then the sequence $(f_n(x))$ does not converge. You may also let $g(x)=1$ for some (or all) $x$ to guarantee pointwise convergence. For uniform convergence, if $M$ is the maximum of $|g|$ on the interval (which exists because the interval is compact), you may require that $M<1$: in this case, $f_n\to 0$ as $n\to \infty$, so, given $\varepsilon>0$, $$ |f_n(x)|=|g(x)|^n\leq M^n<\varepsilon $$ if $n$ is large enough. Note that this is valid for every $x$ in the domain, so the convergence is uniform. If $M=1$, I'm going to show that the convergence is not uniform. Let $c\in[a,b]$ be such that $|g(c)|=M$. You can take a sequence $(x_n)$ of points such that $x_n\to c$. By continuity, the sequence $(|g(x_n)|)$ will converge to $1$. If $m\in\mathbb{N}$, then there's an $x_{n_m}$ in the sequence such that $|g(x_{n_m})|>\tfrac{1}{2^{1/m}}$. This follows from the fact that $\lim\limits_{n\to\infty}|g(x_n)|=1$: we can get as close as we want to $1$, and since $\tfrac{1}{2^{1/m}}<1$, the sequence will eventually be greater than $\tfrac{1}{2^{1/m}}$, and we can let $x_{n_m}$ be one of the terms that satisfies this. In this way we can construct a subsequence $(x_{n_m})$ such that $|f_m(x_{n_m})|=|g(x_{n_m})|^m\geq 1/2$. Note that this implies that the convergence is not uniform, because if we take $0< \varepsilon<1/2$ then you cannot find an $N\in\mathbb{N}$ such that $n\geq N$ implies $|f_n(x)|<\varepsilon$ for all $x\in[a,b]$, since there's always an $m>N$ such that $|f_m(x_{n_m})|\geq 1/2>\varepsilon$. Briefly: if the maximum of $|g|$ is less than $1$, there's uniform convergence; if not, there's no uniform convergence (and maybe not even pointwise convergence).
The "sum" of all fear
What is the value of $\cos(k\pi)$ when $k$ is $0, 1, 2,\dots$? What is the value of $(-1)^k$ when $k$ is $0, 1, 2,\dots$? You will see that only two possible values are involved. Then look at the parity of $k$.
Abbott's Understanding Analysis question 6.2.14
$(f_{n}(x_{1}))_{n}$ is a sequence of numbers. The author claims that we can find a sequence $(n_{k})_{k}$ with $n_{1}\leq n_{2}\leq\cdots$ such that $(f_{n_{k}}(x_{1}))_{k}$ converges. Next, note that $(f_{n_{k}})_{k}$ is a sequence of functions. To simplify notation, the author defines $f_{1,k}=f_{n_{k}}$ (the $1$ is to stress that the sequence $(n_{k})_{k}$ was picked with respect to $x_{1}$). Therefore, the sequence of functions $(f_{n_{k}})_{k}$ can now be written $(f_{1,k})_{k}$. Next, note that $(f_{1,k}(x_{2}))_{k}$ is a sequence of numbers corresponding to evaluating the functions $f_{1,k}$ at the point $x_{2}$.
Prove that the following linear transformation is surjective
Hint: Consider the fundamental theorem of linear transformations (i.e. the Rank-Nullity theorem). What is the dimension of $\text{Null}(T)$ and $\text{Rank}(T)$?
A few questions about elementary counting problems
"Only the first student to answer a particular question correctly receives credit for that question." This would be meaningless if each particular question only went to one student. There are not four different possibilities because you can rotate the die around a vertical axis so the face you labelled is in any of four directions. Note that in the case of dimension $2$, ${n+k \choose k} = \frac{(n+k)!}{n! k!}$ which in multinomial coefficient notation would be ${n+k \choose n, k}$.
Explicit form of a lift $\tilde f: \tilde X_1 \to \tilde X_2$ of a continuous map $f: X_1 \to X_2$
We have a projection $p_1: \tilde X_1 \to X_1$, so first write $g = fp_1: \tilde X_1 \to X_2$. Now $\tilde X_1$ is simply connected, so this lifts to the universal cover of $X_2$; call the lift $\tilde f: \tilde X_1 \to \tilde X_2$. So we have a commutative diagram $$\require{AMScd}\begin{CD} \tilde X_1 @>\tilde f >> \tilde X_2\\ @Vp_1 VV @Vp_2 VV\\ X_1 @>f>> X_2 \end{CD}$$ as desired.
Hilbert polynomial of disjoint union of lines in $\Bbb{P}^3$
The degree of the polynomial is one; this encodes the fact that your variety is of dimension $1$. The leading term is $2$ (or, perhaps better, $2/1!$); this encodes the fact that your variety is of degree $2$ (a generic plane meets it in two points). The meaning of the constant term is a bit more subtle, but here is one way to think about it: Suppose first instead you have two lines meeting in a point; equivalently, two co-planar lines. The two co-planar lines are a (degenerate) conic in the plane, and all planar conics have the same Hilbert polynomial, namely $2t+1$. Now two disjoint lines have $1$ more point than two co-planar lines (because the two co-planar lines share a point in common), and this is reflected in the fact that the Hilbert polynomial is $2t + 2 = (2t+1) +1$; so that gives some significance to the constant term. (This example is discussed somewhere in Hartshorne, maybe in the discussion of flat families near the end of Chapter III; if you degenerate your two skew lines into two lines that meet in a point in a flat family, you don't simply get two co-planar lines, but rather two co-planar lines with a non-reduced structure at the intersection point, reflecting the fact that the "extra point" can't disappear.)
How to prove that an 8x8 chessboard is impossible to fill with dominoes if I remove 2 white squares and 2 black squares from the chessboard?
Actually, you can cover the chessboard then with domino pieces, with one obvious exception: when a corner of the board was not removed, but its two closest neighbours were removed. This was proved by Colin Wright.
help me with derivation of Singular Value Decomposition by justin solomon
There is a step missing in your recounting of the computation. At some point one would have to demand that the $\vec x_i$ are unit vectors, $\|\vec x_i\|=1$. Then with the recognition that $\|\vec y_i\|=\sqrt{λ_i}$ for $\vec y_i=A\vec x_i$ one re-defines $$\vec y_i=\frac1{\sqrt{λ_i}}A\vec x_i$$ so that now also the $\vec y_i$ are unit vectors, $\|\vec y_i\|=1$. One could also have formulated that less confusingly by using $$ \vec u_i=\frac{\vec x_i}{\|\vec x_i\|}~\text{ and }~\vec v_i=\frac{\vec y_i}{\|\vec y_i\|} $$ and assembling $U$ and $V$ from these vectors as columns.
If $a_n\ge nb_n$ and the sequence $(b_n)$ is unbounded, then the differences $a_{n+1}-a_n$ are also unbounded
Suppose that $(c_n)_n$ is bounded and let $c\in \Bbb R$ be such that $c_n\leq c$ for all $n$. Then $a_{n+1}\leq a_n+c$, and by induction we obtain $a_n\leq nc+a_0$. So for all $n\geq 1$, $\dfrac{a_n}{n}\leq c+\dfrac{a_0}{n}\leq c+|a_0|$, and now we have $b_n\leq \dfrac{a_n}{n}\leq c+|a_0|$, i.e. $(b_n)_n$ is bounded, contradicting the hypothesis that $(b_n)_n$ is unbounded.
Taylor series of ln(xy) using 2 different approaches yields different results - how?
The Taylor series near $(1,1)$ should be in powers of $(x-1)$ and $(y-1)$. Your second series is (essentially) of that form. Your first series is not. From the first series, you will need to do $$ 1-xy = -(x-1) - (y-1) - (x-1)(y-1) $$ and substitute that in. A complicated calculation. The mixed terms $(x-1)^k(y-1)^l$ all cancel, of course.
symmetric matrix and eigenvalues
Well, first we must assume that $L$ is nonsingular, otherwise this will not work. Taking $(LAL^T)^T$ we obtain again $LAL^T$, since $A$ is symmetric. Now suppose we have a nonzero vector $x$. Then $x^T(LAL^T)x = (L^Tx)^TA(L^Tx)$, and since $L$ is nonsingular we have $L^Tx \neq 0$; since $A$ is positive definite, we know this will be greater than $0$, and so $A'$ is also positive definite.
Determine the set of positions of $ M$
$$2 e^{i \left(t-\frac{\pi }{2}\right)} \sin (t)=2 e^{i \left(t-\frac{\pi }{2}\right)} \left(\frac{1}{2} i e^{-i t}-\frac{1}{2} i e^{i t}\right)=1-e^{2 i t},\;t\in[0,\pi],$$ which represents a circle of radius $1$ and center $(1,0)$, since $\left|\left(1-e^{2it}\right)-1\right|=\left|e^{2it}\right|=1$ and $2t$ runs over all of $[0,2\pi]$.
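A quick numerical confirmation in Python of both the simplification and the circle:

```python
import numpy as np

t = np.linspace(0.0, np.pi, 7)
w = 2 * np.exp(1j * (t - np.pi / 2)) * np.sin(t)

assert np.allclose(w, 1 - np.exp(2j * t))   # the simplification above
print(np.abs(w - 1))                        # all 1: circle centered at 1 = (1,0)
```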
The implications of Completeness and the Continuity axiom for utility representation
Assumptions for existence of a utility function In your question you imply that the only assumptions needed on preferences to produce a real-valued function (a utility function) that represents those preferences are completeness and transitivity. This is incorrect. To represent preferences with a real-valued function, you need (1) completeness, (2) transitivity, (3) continuous preferences, and (4) local non-satiation. You can find a proof of this fact in Microeconomic Theory by Mas-Colell, Whinston, and Green. A simpler proof using strict monotonicity in place of local non-satiation can be found in other books (Jehle and Reny). The assumption of continuous preferences To clarify, the axiom of continuity is the assumption that the preferences are continuous, not the resulting utility function. The preference relation $\preceq$ is continuous if, for any bundle $x \in X$, the set of all bundles at least as good as $x$, $$ \succeq(x) \equiv \{y \in X \mid y \succeq x \}, $$ is closed. In contrast to a continuous utility function, note that a set of preferences has infinitely many possible representations. For example, the preference relation on levels of money $m \in \mathbb R$ such that $m_1 \succeq m_2$ iff $m_1 \geq m_2$ is continuous as described above. Now, notice that these two utility functions are both valid representations: $$ u_1(m) = m $$ and $$ u_2(m) = m + \boldsymbol 1\{m \geq 2\}. $$ These are both valid because $u_i(m_1) \geq u_i(m_2)$ iff $m_1 \geq m_2$, $i = 1,2$. Thus $u_i(m_1) \geq u_i(m_2)$ iff $m_1 \succeq m_2$, $i = 1,2$. Example of preferences that are not continuous The classic example of preferences that cannot be represented with a real-valued function are lexicographic preferences. The idea behind the proof is that a utility function can represent the ordering along the first category in the lexicographic ordering, but afterwards there "are not enough numbers left" to represent the others. See Mas-Colell p. 46 for details.
Proving a group homomorphism
You’re trying to prove the wrong thing. What you must prove is that if $p,q\in\Bbb R[x]$ (i.e., $p$ and $q$ are polynomials in the indeterminate $x$ with real coefficients), then $$\psi(p+q)=\psi(p)+\psi(q)\;.$$ And this is true: $$\begin{align*} \psi(p+q)&=(p+q)(3)\\ &=p(3)+q(3)\\ &=\psi(p)+\psi(q)\;. \end{align*}$$
Consecutive numbers question.
Hint: I would distinguish two cases: $$6n -5=m$$ $$5n+3=m+1$$ from which, subtracting, we get $n=7$ and $$m=37;$$ or $$5n+3=m$$ $$6n-5=m+1$$ so we get $n=9$ and $$m=48$$
Does $\phi\vDash\bot$ imply that $\vDash\phi\to\bot$ if $\phi$ is a formula that has free variables?
We do indeed have $$\phi\models\perp\quad\iff\quad\models\phi\rightarrow\perp.$$ Your understanding of $\phi\models\perp$ is incorrect: we have $\phi\models\perp$ iff for every structure $\mathcal{M}$, every variable assignment which makes $\phi$ true makes $\perp$ true. Since no assignment can make $\perp$ true, this means that there is no structure $\mathcal{M}$ and variable assignment making $\phi$ true - or in other words, no structure has any tuple satisfying $\phi$. And this clearly matches up with $\models\phi\rightarrow\perp$ (your analysis of this is correct). EDIT: Specifically, the issue is that your definition of $\phi\models\psi$ in the variables-allowed context is incorrect: the "quantification over valuations" has to happen outside the $\models$-part on the right hand side. The right definition is $$\forall \mathfrak{A}, a\in\mathfrak{A}(\mathfrak{A}\models\phi[a]\implies\mathfrak{A}\models\psi[a]).$$ On the other hand, the relation you've defined - which I'll call "$\models_?$" for clarity - is equivalent to the following: $$\forall\mathfrak{A}[\forall a\in\mathfrak{A}(\mathfrak{A}\models\phi[a])\implies \forall a\in\mathfrak{A}(\mathfrak{A}\models\psi[a])].$$ To see the difference between these, consider the following formula in the language consisting of a single unary relation symbol $U$: $\phi(x):\quad$ If $U$ describes a nonempty proper subset of the domain, then $U(x)$. You can check that we have $\phi(x)\models_?\phi(y)$, which clearly should not hold. And this accounts for the apparent discrepancy in the OP. Using the right definition, we have $\phi\models\perp$ iff $$\forall \mathfrak{A},a\in\mathfrak{A}(\mathfrak{A}\models\phi[a]\implies \mathfrak{A}\models\perp)$$ iff $$\forall \mathfrak{A}\color{red}{\forall} a\in\mathfrak{A}(\neg\mathfrak{A}\models\phi[a])$$ as desired.
Fermat's eleventh $F_{11}$ represented as sums of two squares?
You need to know the nature of the complete factorization. Evidently $F_{11}$ is odd, squarefree, and has exactly five prime factors $p,$ each of which satisfies $p \equiv 1 \pmod 4.$ http://www.prothsearch.net/fermat.html#Summary You do not need to know the specific primes, just their number. So there are $2^{5-1}=16$ essentially different expressions $F_{11} = x^2 + y^2$ with $0 < x < y$. Oh, the numbers $(x,y)$ are enormous. http://www.prothsearch.net/fermat.html F11 = 319489 · 974849 · 167988556341760475137 · 3560841906445833920513 · P564 That is enough information to do it all. You can get a computer to specify the prime called $P564$ by getting it to calculate $F_{11},$ carefully typing in the four prime factors given above, then dividing them out of $F_{11};$ the final quotient will be the desired $P564.$ As I said in a comment, after early progress, this was finished by Richard P. Brent, and $P564$ was proved to be prime by F. Morain, all in 1988.
Munkres Topology Question article 17 problem 5
Indeed $[a,b]$ (assuming $a < b$, both in $X$) is always closed, as $$X \setminus [a,b] = \{x \in X: x < a \} \cup \{x \in X: x > b\}\;,$$ and by definition, both the sets on the right hand side are open. So that part is correct. $a \in X$ has a right neighbour iff there exists some element $y \in X$ such that $a < y$ and $(a,y) = \emptyset$; this $y$ is then often denoted $a^{+}$. We define a left neighbour $a^{-}$ similarly (so that $a^{-}<a$ and $(a^{-},a) = \emptyset$). Of course in some ordered sets, no right or left neighbours exist, like in the reals, or the rationals. In others, both exist (like in $\mathbb{Z}$), and in a set like $[0,1] \cup [2,3]$, $1$ is the left neighbour of $2$, etc. I claim that $a \in \overline{(a,b)}$ iff $a$ has no right neighbour. Proof: Suppose $a$ has no right neighbour. Let $O$ be an open subset that contains $a$. If $a$ is the minimum of $X$ (if it exists) this means that for some $y > a$ we have that $[a, y) \subseteq O$, and if $a$ is not minimal we have $z < a$ and $y > a$ such that $(z,y) \subseteq O$. In either case, define $y' = \min(y,b)$; then $y' > a$ and so $(a,y') \neq \emptyset$, which means that $O \cap (a,b) \neq \emptyset$, and so $a \in \overline{(a,b)}$. On the other hand, if $a$ does have a right neighbour $a^{+}$, then $\{x \in X: x < a^{+}\}$ is open in $X$, contains $a$, and does not intersect $(a,b)$ (as a point in it would contradict $(a,a^{+}) = \emptyset$), so $a \notin \overline{(a,b)}$. So we have equivalence. Similarly, $b \in \overline{(a,b)}$ iff $b$ has no left neighbour. So we have equality of $\overline{(a,b)}$ and $[a,b]$ iff $a$ has no right neighbour and $b$ has no left neighbour.
Why is $\text{Li}(x)$ a much better estimate of $\pi(x)$ than $\frac{x}{\log x}$?
Not an answer but a comment which needed some space. A table for some values of $n$: $ \begin{array}{r|r|r|r} n & \text{Li}(n) &\dfrac{n}{\log n}&\pi(n)\\ \hline 1000000 & 78628 & 72383 & 78498 \\ 2000000 & 149055 & 137849 & 148933 \\ 3000000 & 216971 & 201152 & 216816 \\ 4000000 & 283353 & 263127 & 283146 \\ 5000000 & 348639 & 324151 & 348513 \\ 6000000 & 413077 & 384437 & 412849 \\ 7000000 & 476827 & 444123 & 476648 \\ 8000000 & 540000 & 503305 & 539777 \\ 9000000 & 602677 & 562053 & 602489 \\ 10000000 & 664919 & 620421 & 664579 \\ 100000000 & 5762210 & 5428682 & 5761455 \\ 200000000 & 11079975 & 10463629 & 11078937 \\ 300000000 & 16253410 & 15369410 & 16252325 \\ 400000000 & 21337379 & 20194906 & 21336326 \\ 500000000 & 26356833 & 24962409 & 26355867 \\ 600000000 & 31326046 & 29684689 & 31324703 \\ 700000000 & 36254243 & 34370014 & 36252931 \\ 800000000 & 41147863 & 39024158 & 41146179 \\ 900000000 & 46011649 & 43651380 & 46009215 \\ 1000000000 & 50849235 & 48254943 & 50847534 \\ 10000000000 & 455055615 & 434294482 & 455052511 \\ 20000000000 & 882214880 & 843205936 & 882206716 \\ 30000000000 & 1300016132 & 1243550986 & 1300005926 \\ 40000000000 & 1711964132 & 1638528672 & 1711955433 \\ 50000000000 & 2119666540 & 2029608840 & 2119654578 \\ 60000000000 & 2524048318 & 2417638082 & 2524038155 \\ 70000000000 & 2925709820 & 2803166336 & 2925699539 \\ 80000000000 & 3325071593 & 3186579089 & 3325059246 \\ 90000000000 & 3722444262 & 3568161225 & 3722428991 \\ 100000000000 & 4118066401 & 3948131654 & 4118054813 \\ \end{array} $
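The rows can be reproduced (possibly off by one, depending on the rounding and li-offset conventions used) with sympy and mpmath; here is the $n=10^6$ row:

```python
from mpmath import li, log, mpf, nint
from sympy import primepi

# One row of the table: Li(n), n/log n, pi(n) for n = 10^6.
n = 10**6
print(nint(li(n)), nint(mpf(n) / log(n)), primepi(n))
```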
Showing that $\sum_{\nu = 0}^\infty \frac{(-1)^\nu}{z + \nu}$ is locally uniformly convergent
Let $s_N(z)$ denote the partial sum of your series. Note that $$s_{2N+1}(z) = \frac{1}{z(z+1)}+\cdots+\frac{1}{(z+2N)(z+2N+1)}$$ while $$s_{2N}(z) = \frac{1}{z(z+1)}+\cdots+\frac{1}{(z+2N-2)(z+2N-1)}+\frac{1}{z+2N}$$ Using this, you can now argue with Weierstrass' idea and use the fact that $\sum (z+\nu)^{-2}$ converges locally uniformly.
Linear Algebra - Complex equation
The modulus of $i$ is $1$: for every $z\in\mathbb{C}$, $|z|$ is a nonnegative real number: $$ |z|=\sqrt{z\bar{z}} $$ For $z=i$, $$ |i|=\sqrt{i\bar{i}}=\sqrt{i(-i)}=\sqrt{-i^2}=\sqrt{1}=1 $$ The argument is indeed $\pi/2$, so the square roots of $i$ are $$ 1\left(\cos\frac{\pi}{4}+i\sin\frac{\pi}{4}\right)= \frac{1}{\sqrt{2}}+i\frac{1}{\sqrt{2}} $$ and its opposite $$ 1\left(\cos\left(\frac{\pi}{4}+\pi\right)+ i\sin\left(\frac{\pi}{4}+\pi\right)\right) =-\frac{1}{\sqrt{2}}-i\frac{1}{\sqrt{2}} $$
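A two-line Python check:

```python
import cmath

r = cmath.rect(1, cmath.pi / 4)   # modulus 1, argument pi/4
print(r, r**2, (-r)**2)           # both roots square to i (up to float error)
```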
What is $N(t)$ in the definition $E(t) = z(t)+ \frac{N(t)}{\kappa} $ for an evolute?
$N(t):= T'/\kappa$ is the unit normal to the given curve (see here). If you have Elementary Differential Geometry by O'Neill (revised 2nd edition), then you'll see the evolute appears in question 13 of section 2.4 (page 79), and $N(t)$ is defined at the start of section 2.3 (page 59). Also see a brief discussion here.
A problem on Universal enveloping algebra from Humphreys' book
It suffices to find an $L$-module $M$ on which $L$ acts faithfully, that is, $l\cdot m=0$ for all $m\in M$ entails $l=0$. A module for the Lie algebra $L$ is the same as a module for the ring $\mathfrak{U}(L)$. If $L$ acts faithfully on $M$, then $i(l)$ acts non-trivially on $M$ for nonzero $l\in L$, and so $i(l)\ne0$. So now all one has to do is find linearly independent matrices $X$ and $Y$ with $XY-YX=X$. I think that's possible....
Spot error in proof: If A×B⊆C×D , then A⊆C and B⊆D
Let $a$ be an arbitrary element of $A$ and let $b$ be an arbitrary element of $B$. If one of the sets is empty, you can't do that. In particular, if for example $A$ is empty but $B$ is not, then you will never reach the conclusion $b\in D$.
$\eta$-value of a partition and its meaning
Here's a story that might please you: Let $T$ be the set of all transpositions in $S_n$ and define a corresponding discrete Laplace operator $\Delta$ by \begin{equation} \Delta f ( \sigma ) \ = \ \sum_{\tau \in T} \, f ( \sigma ) - f ( \tau \cdot \sigma ) \end{equation} for any complex-valued function $f: S_n \longrightarrow \Bbb{C}$ and permutation $\sigma \in S_n$. It's not hard to check that $\Delta f$ is a class function whenever $f$ is. For $\sigma \in S_n$ consider its transposition length $l(\sigma)$, namely the minimal number of transpositions required to factorise $\sigma$. The mapping $\sigma \mapsto l(\sigma)$ is evidently a class function, and furthermore $l(\sigma) = n-k$ where $k$ is the number of parts of the partition $\lambda$ which encodes the cycle type of $\sigma$. If my calculations are correct then \begin{equation} \Delta l (\sigma) \ = \ 2 \eta\big( \lambda^\text{op} \big) \, - \, \binom{n}2 \end{equation} where $\lambda$ is the partition encoding the cycle type of $\sigma$ and $\lambda^\text{op}$ is the conjugate partition. yours, A. Leverkühn
Evaluate $\iint_{D}\sqrt{1-\frac{x^2}{a^2}-\frac{y^2}{b^2}}\text{d}x\text d y$
If the change is $x = a r\cos\theta$, $y = b r\sin\theta$: $$\frac{x^2}{a^2} + \frac{y^2}{b^2} = r^2\cos^2\theta + r^2\sin^2\theta = r^2.$$
Prove that, for $x \in \mathbb R$ and $\delta_x > 0$, the open interval $(x-\delta_x, x+\delta_x)$ is itself an open set
Let $y \in (x-\delta_x,x+\delta_x)$. We must show that $y$ is an interior point of $(x-\delta_x,x+\delta_x)$. We have $|y - x| < \delta_x$. Therefore $r = \delta_x - |y-x| > 0$. We will prove that $(y-r,y+r) \subseteq (x-\delta_x,x+\delta_x)$, which will prove what we want. Let $z \in (y-r,y+r)$. Then $$|z - x| \leq |z - y| + |y - x| < r + |y - x| = \delta_x.$$ This proves that $z \in (x-\delta_x,x+\delta_x)$, hence that $(y-r,y+r) \subseteq (x-\delta_x,x+\delta_x)$. Therefore $y$ is an interior point of $(x-\delta_x,x+\delta_x)$, completing the proof.
Is $\sum\limits_{n=0}^{\infty} \frac{n}{n+1} (-1)^{n}$ convergent?
As @Tom already explained, if $\sum_{n\ge0}a_n$ converges, then necessarily $\lim_{n\to\infty}a_n=0$. Whatever name that test has (it is usually called the term test, or divergence test), it proves your series diverges. Indeed$$a_n:=\frac{(-1)^nn}{n+1}\implies\lim_{n\to\infty}|a_n|=1\implies\lim_{n\to\infty}a_n\ne0.$$In fact $\lim_{n\to\infty}a_n$ doesn't exist, because the subsequences for even (odd) $n$ have respective limits $1$ ($-1$), which differ. (That test appears to have a name too.)
Prove statement is equivalent to little-o notation
To prove $f\in o(g)$, all we need to show is that for any positive number $\color{blue}c$, $\exists \color{blue}{n_0}$ such that $\forall n\ge \color{blue}{n_0}$, we have $f(n) \color{red} < \color{blue}c · g(n)$. For this, we first choose a positive number $\color{blue}c$. Now if $f\in \bar o(g)$, then for any positive number $\color{green}{c'}$, $\exists \color{green}{n_0'}$ such that $\forall n\ge \color{green}{n_0'}$, we have $f(n) \color{red}\le \color{green}{c'} · g(n)$. Since the statement says we can pick any positive number, we cleverly choose $c'=\dfrac{c}{2}$. With this we have, $$ f(n) \le \frac{c}{2} · g(n)\quad\forall n\ge n_0' $$ Now, we know that $\dfrac{c}{2}<c$ which means $\dfrac{c}{2}\cdot g(n)<c\cdot g(n)$ so we write the above expression as $$ f(n) \le \frac{c}{2} · g(n)<c\cdot g(n) $$ which implies, $$ f(n)<c\cdot g(n)\quad \forall n\ge n_0' $$ We started with any $\color{blue}{c}$ and found a corresponding $\color{blue}{n_0}(=\color{green}{n_0'})$ such that $f(n) \color{red} < \color{blue}c · g(n),\forall n\ge \color{blue}{n_0}$. Hence $f\in o(g)$. So it doesn't matter that we have used $\dfrac{c}{2}$ in the condition of $f\in \bar o(g)$ (we could have chosen $\dfrac{c}{3},\dfrac{c}{4},\dfrac{c}{5}$ etc.). We just wanted to arrive at a strict inequality $<$ from $\le$.
Definition of $\text{Hom}^{G}(V_{1} , V_{2})$
Yes, $\phi$ is just a map of $k[G]$ modules. It's a very common abbreviation to call a morphism of $R$-modules an $R$ homomorphism, or an $R$ linear map. In this specific case, a $k[G]$ morphism is equivalently just a $k$ linear map $V_1 \longrightarrow V_2$ such that $\phi(g v) = g \phi(v)$ for $g \in G$ and $v \in V_1$.
If a, b ∈ Z are coprime show that 2a + 3b and 3a + 5b are coprime.
If an integer $d$ divides both $2a+3b$ and $3a+5b$, then $d$ must divide $-3(2a+3b)+2(3a+5b)=b$, and $d$ must divide $5(2a+3b)-3(3a+5b)=a$, so $d$ must divide $\gcd(a,b)=1$; hence $2a+3b$ and $3a+5b$ are coprime.
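A quick Python spot check of the conclusion:

```python
import math
import random

# Whenever gcd(a, b) = 1, gcd(2a+3b, 3a+5b) should also be 1.
random.seed(0)
for _ in range(10_000):
    a, b = random.randint(1, 999), random.randint(1, 999)
    if math.gcd(a, b) == 1:
        assert math.gcd(2*a + 3*b, 3*a + 5*b) == 1
print("no counterexample found")
```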
The Fundamental group of the circle from "Introduction to knot theory", Ralph H. Fox (1')
I do not understand what you mean exactly by "$\phi^{-1} \phi (X)$ is given in this form", but if you would like some intuition on its use in the proof, I can provide that. To show that the proposition "the image under $\phi$ of any open subset of $\mathbb{R}$ is an open subset of $\mathbb{R}/3$" (5.2) is true, it is sufficient to show that $\phi^{-1}(\phi(X))$ is open whenever $X$ is (by the definition of the quotient topology). Our goal is now to show this using what we know to be true about the function $\phi$, namely: $\phi^{-1}\phi(X)=\bigcup_{n\in J}(3n+X)$. We know this to be true since a little rewriting of the question gives us: $\phi(X)=\phi\left(\bigcup_{n\in J}(3n+X)\right)$, and all this does is note the fact that anything of the form $3n$, $n\in J$, is in the kernel of $\phi$. This makes sense since, if we use the wheel analogy given by Fox, anything that is a multiple of $3$ is in the same equivalence class as zero in the image of the map, due to the fact that after the wheel rolls a distance of $3$ units, it has returned back to zero. Now once we have established this notion, it is clear to see that $\phi$ takes an open set $X$ and translates it to a collection of open sets $3n+X$, $n\in J$. We know the union of open sets is open, and the conclusion follows as stated by Fox. The crux is now that since, for a subset $B$, $B$ is open iff $\phi^{-1}(B)$ is open, we have now shown $\phi^{-1}(B)$ to be open, and it follows that $B$, aka $\phi(X)$, is also open.
Prove that $n ≤ d + 1$
Consider the Gram matrix of $d+2$ distinct unit vectors $v_1,\ldots,v_{d+2}$ of $\mathbb{R}^d$: $$G_{ij}=\langle v_i,v_j\rangle.$$ Assume $\langle v_i,v_j\rangle =\left\{\begin{array}{ll} x, & \text{ if } i\neq j \\ 1, & \text{ if } i=j \end{array}\right.$ Since $v_i\neq v_j$, we have $x<1$. Notice that $G_{d+2\times d+2}=(1-x)Id_{d+2\times d+2}+xA_{d+2\times d+2}$, where $A_{ij}=1$ for every $i,j$. So the spectrum of $G_{d+2\times d+2}$ is $\ \ (1-x)+x(d+2),\ \stackrel{d+1 \text{ times}}{\overbrace{1-x,\ldots, 1-x}}$. The rank of $G$ is at most $d$, since $v_1,\ldots, v_{d+2}\in \mathbb{R}^d$. So the multiplicity of the eigenvalue $0$ of $G$ is at least two. But since $x<1$, the eigenvalue $1-x$ is positive, so the only eigenvalue that can vanish is $(1-x)+x(d+2)$, and its multiplicity is one. This is absurd. Hence there are no $d+2$ distinct unit vectors of $\mathbb{R}^d$ with the same angle between any two of them.
Elementary demonstration: $p$ prime, $1 < a < p$, $1 < b < p$. Then $p \nmid ab$
You need something more than the definition of prime to prove this. The reason is that in other rings (systems of 'numbers' [they can actually be numbers, polynomials, or any 'thing' that you can sum and multiply] with sum and product) there are elements that can be divided (essentially) only by $1$ and themselves, and yet there are pairs of these 'numbers' $a,b$ such that $p\mid ab$ but $p$ does not divide $a$ or $b$. A well-known example is $2\cdot3=(\sqrt {-5}+1)(-\sqrt {-5}+1)$. The best-trodden path for natural numbers is through Euclid's algorithm and Bézout's identity. It is not difficult to read, but kind of long to post here and easy to find.
Related Rates problem with two circles
I believe the easiest way to do this problem would be to choose an origin $O$ at the point equidistant from the two mice ($A$, $B$) at their initial positions, lying on the line through the two centers ($C_1$, $C_2$) of the circles; this line is $\overline{C_1 O C_2}$. The distance between each mouse and the origin ($\overline{OA}$ and $\overline{OB}$) should be easy to calculate using trigonometry, modelling the mice as points moving with some angular velocity around a circle. The distance between them is always the length of $\overline{AB}$ (which passes through $O$), so once you find the rates of change of $\overline{OA}$ and $\overline{OB}$, you have the rate of change of the distance $\overline{AB}=\overline{OA}+\overline{OB}$. I hope this makes sense.
Transformation that makes derivative of a function non-negative everywhere
An example intended to induce you to clarify your use of "analytical transformation": $$ g(f)(x) = \int_0^x \frac{\pi}{2} + \arctan\left( \frac{\mathrm{d}}{\mathrm{d}t}f(t) \right) \,\mathrm{d}t $$ This $g$ is analytic in the sense that it agrees with its power series on a neighborhood of $f \equiv 0$. As an example, for this $g$ and $f(x) = x^2$, $$g(f)(x) = \frac{\pi x}{2} + x \arctan(2x) - \frac{1}{4} \ln(4x^2 + 1) $$ having $$ \frac{\mathrm{d}g(f)(x)}{\mathrm{d}x} = \frac{\pi}{2} + \arctan(2x) \text{.} $$ Many others like this can be made along the following recipe: pick a monotonically increasing function $h$ on $\mathbb{R}$ with a lower horizontal asymptote (examples: $\arctan x$, $\mathrm{e}^x$, any sigmoidal function). Let $m$ be the height of that lower horizontal asymptote and pick $x_0$, which will be the $x$-intercept of $g(f)$. Then $$ g(f)(x) = \int_{x_0}^x h\left( \frac{\mathrm{d}}{\mathrm{d}t} f(t) \right) - m \,\mathrm{d}t $$ is such a $g$. Picking $h(x) = \mathrm{e}^x$, $m = 0$, $x_0 = -1$ and repeating $f(x) = x^2$, we get $$ g(f)(x) = \frac{\mathrm{e}^{2x+2} - 1}{2\mathrm{e}^2} $$ having $$ \frac{\mathrm{d}g(f)(x)}{\mathrm{d}x} = \mathrm{e}^{2x} \text{.} $$ Note that this recipe essentially slavishly enforces your monotonicity of derivatives and then the non-negativity of the derivatives of the result by grabbing the derivatives, applying a monotonically increasing function to them, shifting them up so that the infimum of the transformed derivatives is zero, then integrating.
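If it helps, the first example can be verified symbolically; here is a short SymPy check (my own, mirroring the recipe with $h = \arctan$, $m = -\pi/2$, $x_0 = 0$) that the construction really produces a non-negative derivative:

```python
import sympy as sp

x, t = sp.symbols('x t')
f = t**2                                   # f(x) = x^2, written in the dummy variable t
# g(f)(x) = Integral_0^x (pi/2 + arctan(f'(t))) dt
gf = sp.integrate(sp.pi/2 + sp.atan(sp.diff(f, t)), (t, 0, x))
print(sp.simplify(gf))                     # pi*x/2 + x*atan(2*x) - log(4*x**2 + 1)/4
print(sp.simplify(sp.diff(gf, x)))         # pi/2 + atan(2*x), positive everywhere
```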
Finding 2nd solution of second order ODE
Function $x_1(t)=e^t$ is a solution of: $$tx''-(2t+1)x'+(t+1)x=0$$ Let us try the reduction of order approach, which is to try a second solution $x_2(t)=g(t)x_1(t)$. Inserting into the original DE, we have $$ t(g(t)x_1(t))''-(2t+1)(g(t)x_1(t))'+(t+1)(g(t)x_1(t))=0 $$ The derivatives are $$ (g(t)x_1(t))' = g'(t)x_1(t) + x_1'(t)g(t) $$ $$ (g(t)x_1(t))'' = g''(t)x_1(t) + 2g'(t)x_1'(t) + x_1''(t)g(t) $$ Substituting gives $$ g(t)\left(tx_1''(t)-(2t+1)x_1'(t)+(t+1)x_1(t)\right) + F(x_1,g,t) =0 $$ where $$ F(x_1,g,t) = t(g''(t)x_1(t) + 2g'(t)x_1'(t))-(2t+1)g'(t)x_1(t) $$ The term in parentheses two lines above is zero because $x_1$ satisfies the original DE, leaving $$ F(x_1,g,t)=0 $$ $$ t(g''(t)x_1(t) + 2g'(t)x_1'(t))-(2t+1)g'(t)x_1(t) = 0 $$ Now since $x_1=x_1'=e^t$, that factor cancels everywhere, leaving $$ t(g''(t) + 2g'(t))-(2t+1)g'(t) = 0 $$ The two $2tg'(t)$ terms also cancel, leaving $$ tg''(t)-g'(t) = 0 $$ Letting $h=g'$ reduces this to a first order DE: $$ th'(t)-h(t) = 0 $$ Separation gives $$ \frac{1}{h(t)}\frac{dh}{dt} = \frac{1}{t} $$ Now integrating $dt$, $$ \int \frac{1}{h} dh = \int \frac{1}{t} dt, $$ $$ \log(h) = \log(t)+c, $$ for some constant $c$. Exponentiating both sides gives $$ h = Ct, $$ for some other constant $C=e^c$. Since $g'=h$, we integrate once more to get $g$: $$ g = At^2+B, $$ where $A=\frac{C}{2}$ and $B$ is another constant of integration. Now we put together our final solution $$ x_2(t) = (At^2+B)e^t $$ Plugging this into the DE verifies that it is a second solution. You can verify its linear independence using the Wronskian determinant: $$ \left| \begin{array}{cc} e^t & e^t \left(A t^2+B\right) \\ e^t & e^t \left(A t^2+B\right)+2 At e^t \\ \end{array} \right| = 2 A t e^{2 t}, $$ which does not vanish identically on any interval (for $A\ne0$); so the solutions are independent.
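As a quick machine check of both claims (that $x_2$ solves the ODE and that the Wronskian is $2Ate^{2t}$), here is a short SymPy sketch:

```python
import sympy as sp

t, A, B = sp.symbols('t A B')
x1 = sp.exp(t)
x2 = (A*t**2 + B) * sp.exp(t)

# Left-hand side of t x'' - (2t+1) x' + (t+1) x = 0 for x = x2:
lhs = t*sp.diff(x2, t, 2) - (2*t + 1)*sp.diff(x2, t) + (t + 1)*x2
print(sp.simplify(lhs))    # 0, so x2 is a solution for any A, B

# Wronskian of x1 and x2:
W = sp.simplify(x1*sp.diff(x2, t) - x2*sp.diff(x1, t))
print(W)                   # 2*A*t*exp(2*t)
```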
Equivalent definitions of abelian categories - reference request
This sequence is basically giving us the image factorisation of $\phi$, and I'll assume that you've seen that you can always construct this factorisation in an abelian category. Our task is therefore to go the other way around and show that a category with this sort of image factorisation satisfies the three conditions in the definition of abelian category. Here's a sketch of the proof: Every morphism $\phi$ has a kernel and cokernel, because we need them to define the above sequence. Suppose $\phi$ is a monomorphism. Its kernel is the zero morphism, so the cokernel of that kernel is isomorphic (in the suitable sense) to the identity. Therefore $\phi$ is isomorphic in the same sense to $j$, the kernel of the cokernel. Thus $\phi$ is also the kernel of its cokernel. The situation for epimorphisms is dual to that for monomorphisms. Thus a category satisfying this image factorisation condition is an abelian category.
Cauchy Riemann Equations for $h(z)=|z|^2z^2$
Yes. Analytic functions must satisfy the Cauchy-Riemann equations, and here they fail everywhere except the origin: writing $h(z)=|z|^2z^2=z^3\bar z$, we get $\partial h/\partial \bar z = z^3$, which vanishes only at $z=0$. So $h$ is analytic on no open set.
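Concretely, writing $h=u+iv$, a short SymPy computation (a sanity check, not required for the argument) shows where the Cauchy-Riemann equations fail:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I*y
h = sp.expand((x**2 + y**2) * z**2)               # |z|^2 z^2
u, v = sp.re(h), sp.im(h)
print(sp.factor(sp.diff(u, x) - sp.diff(v, y)))   # 2*x*(x**2 - 3*y**2)
print(sp.factor(sp.diff(u, y) + sp.diff(v, x)))   # 2*y*(3*x**2 - y**2)
# Both expressions vanish simultaneously only at the origin, so h is
# complex-differentiable only at z = 0 and analytic nowhere.
```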
Solving expression with multiple summation notations
This is just a start. As Ross Millikan observed, you don't need the limits. We have $$ Y = \sum^{100}_{j=0}\sum^{j}_{m=0}\sum^{100}_{q=0} \frac{\sigma^j}{e^\sigma j!(j-m)!}\frac{(-1)^q}{q!}A^q \theta^{2q-(j-m)+1}\frac{1}{p_0^q}=\\ e^{-\sigma}\sum^{100}_{j=0}\sum^{j}_{m=0}{\frac{\sigma^j\theta^{m-j+1}}{j!(j-m)!}}\sum^{100}_{q=0}{\frac{(-1)^q}{q!}\left(\frac{A\theta^2}{p_0}\right)^q} $$ The inner sum $\approx e^{-A\theta^2/p_0}$ with an error less than $\frac{1}{101!}\left(\frac{A\theta^2}{p_0}\right)^{101}.$ At least, this reduces the problem from a triple sum to a double sum.
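To see the reduction in action numerically, here is a small Python sketch; the parameter values are placeholders of my own choosing, since the original expression does not fix $\sigma,\theta,A,p_0$:

```python
from math import factorial, exp

sigma, theta, A, p0 = 2.0, 0.5, 1.0, 3.0   # placeholder values

inner = exp(-A * theta**2 / p0)            # stands in for the inner q-sum
Y = exp(-sigma) * inner * sum(
    sigma**j * theta**(m - j + 1) / (factorial(j) * factorial(j - m))
    for j in range(101)
    for m in range(j + 1)
)
print(Y)
```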
Where does this inequality come from for $e^{-2}$?
As usual everything follows from $e^x \geq 1+x$. For $x \in \mathbb{R}$ and $n \in \mathbb{N}$ such that $1 + \frac{x}{n} > 0$ this inequality leads to $$ e^x = e^{\frac{x}{n} \cdot n} \geq \left(1 + \frac{x}{n} \right)^n $$ and taking reciprocals $$ e^{-x} \leq \left(1 + \frac{x}{n} \right)^{-n} = \left(1 - \frac{x}{n + x}\right)^n. $$ Take $x=1$ to get $$ \frac{1}{e} \leq \left(1 - \frac{1}{n+1}\right)^n $$ and therefore $$ \frac{1}{e^2} \leq \left(1 - \frac{1}{n+1}\right)^{2n}. $$
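A quick numerical check of the final bound, for what it's worth:

```python
import math

# 1/e^2 <= (1 - 1/(n+1))^(2n) should hold for every n >= 1,
# with the right-hand side decreasing to 1/e^2 as n grows.
for n in (1, 5, 50, 500):
    rhs = (1 - 1/(n + 1))**(2*n)
    print(n, rhs, math.exp(-2) <= rhs)
```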
Embedding a finite group into symmetric groups
In fact the group $Z_q\rtimes\operatorname{Aut}(Z_q)$ embeds in $S_q$. See this page on Wikipedia about the holomorph.
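To make the embedding concrete for a small prime, here is a minimal Python sketch (my own illustration): for prime $q$, the holomorph $Z_q\rtimes\operatorname{Aut}(Z_q)$ is the group of affine maps $x\mapsto ax+b \pmod q$ with $a\not\equiv 0$, and each such map is visibly a permutation of $\{0,\dots,q-1\}$:

```python
from itertools import product

q = 5  # a prime, so Aut(Z_q) is the multiplicative group {1, ..., q-1}
perms = {
    tuple((a*x + b) % q for x in range(q))   # the affine map x -> a*x + b as a permutation
    for a, b in product(range(1, q), range(q))
}
print(len(perms))  # q*(q-1) = 20 distinct permutations: a copy of Hol(Z_5) inside S_5
```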
Definition of solution of a differential equation and how do we know we obtained a solution?
If you are given the DE $y'(x)=f(x,y(x))$, then your task is to find all functions $g(x)$ such that $y(x)=g(x)$ is a solution. Such functions are not unique, but they all verify the DE. Example: $$y'(x)=y(x)$$ is solved by $$y(x)=Ce^x$$ where $C$ is an arbitrary constant. As you can check, for any $C$, $$(Ce^x)'=Ce^x.$$ If you are given a function $g(x)$, it is possible to find many DEs that it fulfills. And every such equation has a family of solutions. Example: The function $$y(x)=e^x$$ is a solution of the DE $$y'(x)=e^x.$$ The other solutions of this equation are $$y(x)=e^x+C.$$ It is also a solution of $$y'(x)=\frac{e^{2x}}{y(x)}$$ whose other solutions are $$y(x)=\pm\sqrt{{e^{2x}+C}}.$$ If you are given a single solution $g(x)$, you cannot retrieve $f$. For this, you need to know the whole family of functions. Example: From $$y(x)=Ce^x$$ you derive $$y'(x)=Ce^x=y(x),$$ and $$y'(x)=y(x)$$ is the only DE you obtain in which $C$ has been eliminated.
Upper bound of a complex integral
Using the residue theorem, we find that $$\begin{align} A_n(x)&=e^{-x}\lim_{t\to 0}\frac{d^n}{dt^n}e^{xe^t}\\\\ &=e^{-x}\lim_{t\to 0}\left(\sum_{k=1}^n e^{xe^t}B_{n,k}(xe^t, xe^t, \dots, xe^t)\right)\\\\ &=\sum_{k=1}^n B_{n,k}(x,x, \dots, x)\\\\ &=B_n(x,x,\dots,x)\\\\ &=T_n(x)\\\\ &=\sum_{k=0}^n {n \brace k} x^k \end{align}$$ where $B_{n,k}(x_1,x_2,\dots, x_n)$ are the Bell polynomials, $B_n(x_1, x_2,\dots, x_n)$ is the $n$th complete exponential Bell polynomial, $T_n(x)$ is the Touchard polynomial, and ${n \brace k}$ are the Stirling numbers of the second kind.
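One can confirm the identity symbolically for small $n$; here is a SymPy sketch (my own check) comparing the limit with the Touchard polynomial:

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x, t = sp.symbols('x t')
n = 4
# e^{-x} (d/dt)^n e^{x e^t} evaluated at t = 0 ...
lhs = sp.expand(sp.simplify(sp.exp(-x) * sp.diff(sp.exp(x*sp.exp(t)), t, n).subs(t, 0)))
# ... against the Touchard polynomial sum_k S(n, k) x^k:
rhs = sp.expand(sum(stirling(n, k, kind=2) * x**k for k in range(n + 1)))
print(lhs)        # x**4 + 6*x**3 + 7*x**2 + x
print(lhs - rhs)  # 0
```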
Volume of Revolution about y=1
$$\int_0^1 \left(\pi (1-x)^2-\pi(1-\sqrt{x})^2 \right)dx$$ This is the washer method: at each $x$, take the bigger circle's area minus the smaller circle's area, $\pi R^2-\pi r^2$, with outer radius $R=1-x$ and inner radius $r=1-\sqrt{x}$ measured from the axis of revolution $y=1$.
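Evaluating that washer integral (e.g. with SymPy) gives the volume:

```python
import sympy as sp

x = sp.symbols('x')
V = sp.integrate(sp.pi*(1 - x)**2 - sp.pi*(1 - sp.sqrt(x))**2, (x, 0, 1))
print(V)   # pi/6
```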
How to find the crossing number of a hypercube Q4?
The crossing number of the hypercube $Q_4$ is $8$. $Q_4$ can be constructed from two disjoint copies of $Q_3$, each of which has crossing number $0$ (i.e. $Q_3$ is planar), by adding an edge from each vertex in one copy of $Q_3$ to the corresponding vertex in the other copy. For general $n$, the crossing number of $Q_n$ is bounded below by $\frac{4^n}{20}$ up to lower-order terms, and bounded above by $\frac{165}{1024}\,4^n$.
Interpretation of $\Omega_\pm(x^c)$
After searching a bit more, I found this question, and in a comment Peter Humphries explains that "$\Omega_\pm$ is standard notation in analytic number theory. $f(x)=\Omega_\pm{(g(x))}$ means that $\limsup_{x \to \infty} \frac{f(x)}{g(x)} > 0$, while $\liminf_{x \to \infty} \frac{f(x)}{g(x)} < 0$". From this definition, I conclude that the answer to my question would be NO, as my statement $-\sqrt{x}\leq\pi(x)-\sigma(x)\leq\sqrt{x}$ is stronger than $\pi(x)-\sigma(x)=\Omega_\pm(\sqrt{x})$.
Limit of a difference of sines: $\lim \limits_{x \to \infty}{(\sin(1+x) - \sin(x))}$
$\sin (1+x)-\sin x = 2 \cos ( x+{1 \over 2}) \sin {1 \over 2}$. Since $\sin {1 \over 2} \neq 0$, we see that $x \mapsto \sin (1+x)-\sin x$ takes the values $\pm 2 \sin {1 \over 2}$ infinitely often, hence no limit exists.
$S/I$ is an integral extension of $R/(R\cap I)$
Let $\bar{s} \in S/I$ with $s \in S$. By hypothesis, $s$ is integral over $R$, hence there are $a_i \in R$ such that $s^n+a_{n-1}s^{n-1}+\cdots+a_1 s + a_0=0$. Now this equation modulo $I$ becomes $\bar{s}^n+\bar{a}_{n-1}\bar{s}^{n-1}+\cdots+\bar{a}_1 \bar{s} + \bar{a}_0=0$, where we view the coefficients $\bar{a}_i$ as elements of the $R$-module $(R+I)/I$. Now by the third isomorphism theorem we have $(R+I)/I \cong R/(R\cap I)$ and we are done. Alternatively, and more consistently with your notation, let $\pi:S \rightarrow S/I$ be the natural projection. For $s \in S$ there exists monic $p(x) \in R[x]$ such that $p(s)=0$. Now $p(s)$ is an element of $S$ and its image under $\pi$ is $p^{\pi}(\pi(s))=0$, where $p^{\pi}$ denotes the monic polynomial that we obtain from $p(x)$ by applying $\pi$ to its coefficients. Now use the third isomorphism theorem to see that actually $p^{\pi}(x) \in \left(R/(R\cap I)\right)[x]$.
Topology on Euclidean Group
Topologically the Euclidean group is the product $O_n(\mathbb R)\times \mathbb R^n$ (but beware that the group structures are different: the Euclidean group is a semi-direct product of $O_n(\mathbb R)$ and $\mathbb R^n$, not the direct product). Of course $ \mathbb R^n$ has its usual metric structure and $O_n(\mathbb R)$ has the induced topology from the inclusion $O_n(\mathbb R)\subset M_n(\mathbb R)\cong \mathbb R^{n^2}$.
Limit that appears in particle physics calculation
As $\delta(x)$ is not a function, you cannot do a simple algebraic manipulation. What you can do is verify the defining properties of the delta "function". You can prove that for $x \neq 0$ your limit is $0$. You can also prove that $\int\frac{\epsilon\ dx}{x^2+\epsilon^2}=\arctan\left(\frac x\epsilon\right)+c,$ so the integral from $-\infty$ to $\infty$ is $\pi$, and the limit of the integral over any interval that crosses $0$ is $\pi$.
Using dominated convergence to prove partial derivative and integral can be interchanged
First of all, does $f(x,y)$ continuous in $\mathbb{R}^2$ imply it is continuous at a fixed $x$ as a function of $y$? Yes, that's correct and holds for any continuous function $f: \mathbb{R}^2 \to \mathbb{R}$. For a fixed point $(x_0,y_0)$ continuity at $(x_0,y_0)$ means that for all $\epsilon>0$ there exists $\delta>0$ such that $$|f(x,y)-f(x_0,y_0)| \leq \epsilon \qquad \text{for all} \, \, |(x_0,y_0)-(x,y)| \leq \delta. \tag{1}$$ Now for fixed $x_0,y_0$ and $\epsilon>0$ choose $\delta>0$ as above; then we have $$|f(x_0,y)-f(x_0,y_0)| \leq \epsilon \qquad \text{for all} \, \, |y-y_0| \leq \delta,$$ which shows that $f(x_0,\cdot)$ is continuous at $y=y_0$. Since $x_0,y_0$ are arbitrary, this finishes the proof. If so does this imply $f$ is uniformly continuous on $[a,b]$ as a function of $y$? Yes, any continuous function which is defined on a compact interval is uniformly continuous, see this question; however, this is not needed for the proof of the assertion. If so then $f$ is bounded by its supremum on the finite interval and hence its integral exists? Correct. (For this we don't need uniform continuity; continuity is enough.) Note that the continuity also implies measurability of $f(x,\cdot)$ for each $x$. Do we turn this problem into a more tractable form by changing the derivative into a limit of a sequence of functions $n(f(x+1/n,y)-f(x,y))$? Yes, exactly. Set $$F(x) := \int_{[a,b]} f(x,y) \, dy.$$ We have to show that $$\frac{\partial}{\partial x} F(x) = \int_{[a,b]} \partial_x f(x,y) \, dy. \tag{2}$$ First of all, note that $F$ is well-defined because $f(x,\cdot)$ is continuous (see above) and, moreover, $$\frac{F(x+1/n)-F(x)}{1/n} = n \int_{[a,b]} (f(x+1/n,y)-f(x,y)) \, dy.$$ For fixed $x$ we define a sequence of auxiliary functions by $$u_n(y):= n (f(x+1/n,y)-f(x,y)).$$ Since $f(\cdot,y)$ is, by assumption, differentiable, we have $$u_n(y) \xrightarrow[]{n \to \infty} \partial_x f(x,y).$$ On the other hand, the mean value theorem shows $$|u_n(y)| \leq \sup_{\lambda \in [0,1]} |\partial_x f(x+\lambda/n,y)| \leq \sup_{y \in [a,b]} \sup_{|u-x| \leq 1} |\partial_x f(u,y)|.$$ Since $\partial_x f$ is continuous, the right-hand side is a (finite) constant and therefore integrable on the finite interval $[a,b]$. This means that we have found an integrable dominating function for $u_n$. Applying the dominated convergence theorem, we find that $$\frac{F(x+1/n)-F(x)}{1/n} = \int_{[a,b]} u_n(y) \, dy \xrightarrow[]{n \to \infty} \int_{[a,b]} \partial_x f(x,y) \, dy.$$ This proves $(2)$.
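For intuition, the interchange can also be watched numerically; here is a small sketch with the arbitrary choice $f(x,y)=\sin(xy)$ on $[a,b]=[0,1]$:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x, y: np.sin(x * y)
F = lambda x: quad(lambda y: f(x, y), 0, 1)[0]   # F(x) = integral of f(x, y) dy

x0 = 0.7
for h in (1e-1, 1e-3, 1e-5):
    print(h, (F(x0 + h) - F(x0)) / h)            # difference quotients of F
print("limit:", quad(lambda y: y * np.cos(x0 * y), 0, 1)[0])  # integral of d_x f
```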
Preimage of $f:\mathbb{R}^n\rightarrow\mathbb{R}^k$ with >1 dimension having equal value in $\mathbb{R}^k$
This seems to be false. Let $n = 2, k = 3$, and $$ f(x, y) = (x, x, y) $$ Then $$ J_f = \pmatrix{1 & 1 & 0 \\ 0 & 0 & 1} $$ has rank $2 = n$ everywhere, so $l(G) = 0$, as required. On the other hand, $f^{-1}(A)$ is all of $\Bbb R^2$, hence has infinite measure in $\Bbb R^2$. This example is easily generalized to any higher dimension with $n < k$.
How to find the force that has to be pulled to a set of two blocks given the condition that a wire can break?
I don't see any contradiction. From the final equation $9a^2=18a$ we can find the solution $a=2$ (for $a\ne0$), which can then be used to find $F$. (But note that the correct equation is $mg-2F=ma$, because the cable is tied to the roof.)
Confused about notation of $\sin^2 \theta$
I think your eyes skipped the "-" in the second expression. $$(a + b) (a - b) = a^2 - b^2$$ And so: $$ 1 - \sin^2 \theta = (1 + \sin \theta) (1 - \sin \theta)$$ $ 1 - \sin^2 \theta = 1 - (\sin \theta)^2$ is correct.
Is $Cov(A,BA)$ equal to $Var(A)E(B)$ when $A$ is a vector and $B$ is a matrix?
Since we are dealing with vectors and matrices, we must be careful with operations like squares and so on. Assuming $A$ and $B$ are independent, the calculation would go like this I think: \begin{eqnarray*}Cov(A,BA)&=&E[(A-E[A])(BA-E[BA])^T]\\ &=&E[(A-E[A])(B(A-E[A]))^T]\\ &=&E[(A-E[A])(A-E[A])^TB^T]\\ &=&Cov(A)E[B]^T\end{eqnarray*} (The second and last equalities use the independence of $A$ and $B$.) Here $Cov(A)$ is the covariance matrix of the vector $A$, which corresponds to $Var(A)$. So the result is basically what you expected, but we need to respect matrix rules: note the transpose on $E[B]$.
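A quick Monte Carlo check (with an arbitrary independent $A$ and $B$ of my own choosing) agrees with $Cov(A)E[B]^T$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 200_000
A = rng.normal(size=(N, n))                  # Cov(A) = I
B = rng.normal(size=(N, n, n)) + np.eye(n)   # independent of A, with E[B] = I
BA = np.einsum('kij,kj->ki', B, A)           # B_k A_k for each sample k

Ac, BAc = A - A.mean(axis=0), BA - BA.mean(axis=0)
print(np.round(Ac.T @ BAc / N, 2))           # sample Cov(A, BA): approximately I = Cov(A) E[B]^T
```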
Show that the points $A(-2\hat{i}+3\hat{j}+5\hat{k}), B(\hat{i}+2\hat{j}+3\hat{k})$ and $C(7\hat{i}-\hat{k})$ are collinear
Yes you can. Also, we can use the following: $$\vec{AB}=3\vec{i}-\vec{j}-2\vec{k}$$ and $$\vec{AC}=9\vec{i}-3\vec{j}-6\vec{k}=3\vec{AB},$$ which shows $\vec{AB}\parallel\vec{AC}$; since both vectors emanate from $A$, the three points are collinear.
Does $\sum_{k=1}^{\infty}\frac{k!}{k^k}$ converge?
For $n\ge 3$ we have $$\frac{n!}{n^n}\le\frac{1\cdot 2}{n^2}$$ Since the series $\sum n^{-2}$ converges, your series converges, too.
Lagrange Multiplier Question and my attempt
There must be some constraints on $x,y,z$. Without using Lagrange multipliers, and assuming $x,y,z\gt0$: by AM-GM, $$\frac{x+y+z}{3}\ge\sqrt[3]{xyz},$$ so $$x+y+z=a\ge3\sqrt[3]{xyz}$$ and therefore $$\frac{a^3}{27}\ge xyz,$$ with equality exactly when $x=y=z$.
Determinant of an $n×n$ identity matrix with two arbitrary rows
Computing that determinant directly: call the matrix $A$, say the first two rows are the arbitrary ones, and $a_{ij}=\delta_{ij}$ for $i>2$. Expanding repeatedly along the identity rows (each row $e_k$, $k\ge3$, deletes row $k$ and column $k$) reduces the determinant to the top-left $2\times 2$ block, so expanding along the first row with cofactors gives $$|A|= a_{11}\cdot A_{11} + a_{12}\cdot A_{12}$$ where $A_{11}=a_{22}$ and $A_{12}=-a_{21}$ (the cofactor carries a sign). So finally the determinant is the cross multiplication of those four elements: $$|A| = a_{11}\cdot a_{22} - a_{12}\cdot a_{21}$$
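A quick NumPy check of the formula on a random instance:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = np.eye(n)
M[0, :] = rng.normal(size=n)   # the two arbitrary rows
M[1, :] = rng.normal(size=n)
print(np.isclose(np.linalg.det(M), M[0, 0]*M[1, 1] - M[0, 1]*M[1, 0]))  # True
```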
Let $M$ be a free $R$-module, and let it have an infinite basis. Then all bases of $M$ have the same cardinality
If a free module has an infinite basis, it is not finitely generated; so any two bases must be infinite. Suppose $B$ and $C$ are bases. For each $b\in B$, there is a finite subset $C(b)$ of $C$ such that $b$ is a linear combination of the elements in $C(b)$. Thus we have a map $b\mapsto C(b)$; then $|B|\ge|\{C(b):b\in B\}|$. Since each $C(b)$ is finite, we have $$ \Bigl|\bigcup_{b\in B}C(b)\Bigr|\le \aleph_0|B|=|B| $$ Now, prove that $$ \bigcup_{b\in B}C(b)=C $$ By symmetry, $|B|\le|C|$.
Is it possible to transform a periodic function to a linear one?
Let the periodic function be $f(x)$. One way to approach this would be to set $$ g(s):=\int_0^s(f(x)-\inf f(x))\,\text dx $$ For example, consider $f(x)=\sin x$. The infimum of $\sin$ is $-1$. So we would have $$g(s)=\int_0^s(\sin(x)-(-1))\,\text dx=\left.x-\cos x\right|_0^s=s+1-\cos s$$ The result is nondecreasing, since $g'(s)=f(s)-\inf f\ge 0$, and it is linear on average: over each period of length $T$, $g$ increases by the fixed amount $\int_0^T(f-\inf f)\,\text dx$, so $g(s)$ is a linear function of slope $\frac1T\int_0^T(f-\inf f)\,\text dx$ plus a bounded periodic correction.
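A quick numerical confirmation that this $g$ is nondecreasing:

```python
import numpy as np

s = np.linspace(-10, 10, 2001)
g = s + 1 - np.cos(s)            # the g(s) obtained above for f = sin
print(np.all(np.diff(g) >= 0))   # True: g'(s) = 1 + sin(s) >= 0
```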
Is it true that $G$ is abelian?
Let $G=S_{3}$ and $H=A_{3}$. A counterexample! Added as PS: a quotient group is not necessarily abelian; take $G=S_{3}$ and $H=\lbrace \text{identity} \rbrace$. Then $H$ is normal, but $G/H \cong S_{3}$ is not abelian.
To find the ratio of length and width in terms of $k$
Hint: let the rectangular part be $L$ high and $W$ wide. The height of the semicircle is $W/2$, so $L+W/2=H$. The light admitted is $LW$ plus $k$ times the area of the semicircle, so use the constraint to eliminate $L$ and write the light admitted as a function of $W$ alone. Then differentiate, set to zero, ...
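Here is a sketch of that computation in SymPy, assuming (as the hint does) that the total height $H$ is the fixed quantity:

```python
import sympy as sp

W, H, k = sp.symbols('W H k', positive=True)
L = H - W/2                                                # constraint from the hint
light = L*W + k * sp.Rational(1, 2) * sp.pi * (W/2)**2     # rectangle + k * semicircle area
Wstar = sp.solve(sp.diff(light, W), W)[0]
print(sp.simplify(Wstar))                  # 4*H/(4 - pi*k), meaningful for k < 4/pi
print(sp.simplify((H - Wstar/2) / Wstar))  # the ratio L/W = 1/2 - pi*k/4
```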
Why is $\oint_{C_{r_{0}}} \mathbf{v} \cdot \mathbf{r}' \mathrm ds$ called the circulation of the flow around $C_{r_{0}}$?
This comes from understanding the geometric interpretation of the dot product and thinking about what would happen in the event of a purely rotational vector field. The dot product $\mathbf{a}\cdot\mathbf{b}=\|\mathbf{a}\|\|\mathbf{b}\|\cos\theta$ is a scaled version of $\mathbf{a}$'s projection onto $\mathbf{b}$ and vice-versa. In the event that $\mathbf{b}$ is a unit vector, it is exactly the signed length of the orthogonal projection of $\mathbf{a}$ onto $\mathbf{b}$. That is, if we decompose $\mathbf{a}=\mathbf{a}_{\|}+\mathbf{a}_{\perp}$ into components parallel and perpendicular to $\mathbf{b}$ respectively, then the dot product is $\mathbf{a}\cdot\mathbf{b}=(\mathbf{a}_{\|}+\mathbf{a}_{\perp})\cdot\mathbf{b}=\mathbf{a}_{\|}\cdot\mathbf{b}$, which is the signed length of $\mathbf{a}$'s projection onto the oriented $\mathbf{b}$-axis. Say you have a constant vector field $\mathbf{F}$ in the plane which represents liquid flowing in a vaguely northerly direction (i.e. the $y$-coordinate of $\mathbf{F}$ is positive). If $\mathbf{F}$ points straight up, then the amount of liquid that passes through a unit length horizontal line segment will be $\|\mathbf{F}\|$; however, if the direction of the flow is angled at all, it is intuitively clear that less liquid will pass through the line segment per unit time. More precisely, the rate of flow through the horizontal line segment will be the dot product of $\mathbf{F}$ with the unit normal pointing straight north - this represents the vertical component of the flow. (If $\mathbf{F}$ were southerly-directed, this rate would be negative.) Now imagine there is a mystical horizontal line with a ball on it. The line magically negates all vertical force acting on the ball, but allows horizontal force to act on it. So when the liquid is moving, the ball doesn't move up or down at all, it only moves left and right. The rate of change of how much it moves to the right (east) is the dot product of $\mathbf{F}$ with the unit vector pointing east. The moral here is that the dot product $\mathbf{F}\cdot\mathbf{s}$ will measure the component of the field $\mathbf{F}$ along the direction of $\mathbf{s}$. If you want to measure the flow across a surface, for instance, one must take the dot product $\mathbf{F}\cdot\mathbf{n}$ of the vector field $\mathbf{F}$ with the surface's unit normal vector $\mathbf{n}$, yielding the Divergence Theorem. If instead you want to measure how much $\mathbf{F}$ flows around a circle, say, you need to completely discard the components of $\mathbf{F}$ that are moving across the boundary (the components parallel to the unit normal) and settle on the components of $\mathbf{F}$ that are moving along the boundary (the components parallel to the unit tangent vectors). This yields the definition of circulation.
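To tie the picture to the formula, here is a tiny numerical sketch computing the circulation of the purely rotational field $\mathbf{F}(x,y)=(-y,x)$ around the unit circle:

```python
import numpy as np

# Parametrize the unit circle by r(t) = (cos t, sin t) and approximate
# the line integral of F . r' dt for F(x, y) = (-y, x).
t = np.linspace(0, 2*np.pi, 10_000, endpoint=False)
dt = t[1] - t[0]
Fx, Fy = -np.sin(t), np.cos(t)        # F evaluated on the circle
dx, dy = -np.sin(t), np.cos(t)        # r'(t)
print(np.sum(Fx*dx + Fy*dy) * dt)     # approximately 2*pi: the flow is entirely along the boundary
```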
phase flow of a non-uniform oscillator $\dot{\theta} = \mu + \cos{\theta} + \cos{2\theta}$ and possible error in posted solution
Use that $$\cos(x)+\cos(y)=2 \cos \left(\frac{x}{2}-\frac{y}{2}\right) \cos \left(\frac{x}{2}+\frac{y}{2}\right)$$ With $x=\theta$ and $y=2\theta$ this gives $\cos\theta+\cos 2\theta = 2\cos\left(\frac{\theta}{2}\right)\cos\left(\frac{3\theta}{2}\right)$, so the equation becomes $\dot{\theta} = \mu + 2\cos\left(\frac{\theta}{2}\right)\cos\left(\frac{3\theta}{2}\right)$.