Curve Fitting to Represent Any Data
I could be wrong, but I'm pretty sure this is a lot like how JPEG image compression works. They take in a bunch of data points (pixels) and, using the principles of the Fourier transform, figure out which combination of cosine curves, when added together, will reproduce the original data. Here is a nice link http://nautil.us/blog/the-math-trick-behind-mp3s-jpegs-and-homer-simpsons-face The more modern compressors use "wavelets" instead of cosines, but the basic idea still sounds really similar to your idea.
Generating function for all partitions of given length.
You are counting the number of partitions of an integer $r$ into $m$ parts. (See, for instance, section 3.16 of Wilf's generatingfunctionology.) Then, accounting for the number of ones, the number of twos, the number of threes,..., the number of partitions of $r$ into $m$ parts would be the coefficient of $x^ry^m$ in $$(1+xy+(xy)^2+(xy)^3+\cdots)(1+x^2y+(x^2y)^2+\cdots)(1+x^3y+(x^3y)^2+\cdots)\cdots$$ or equivalently $$\prod_{i\ge1}\frac{1}{1-x^iy}.$$
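As a sanity check, here is a quick Python sketch (the truncation degree `R` is an arbitrary choice) that extracts the coefficient of $x^ry^m$ from the product by dynamic programming, one geometric factor per part size, and compares it against the standard recurrence for partitions into exactly $m$ parts:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(r, m):
    # partitions of r into exactly m parts:
    # either drop a part equal to 1, or subtract 1 from every part
    if r == 0 and m == 0:
        return 1
    if r <= 0 or m <= 0:
        return 0
    return p(r - 1, m - 1) + p(r - m, m)

R = 12  # truncation degree
# coef[r][m] = coefficient of x^r y^m in prod_{i>=1} 1/(1 - x^i y), truncated
coef = [[0] * (R + 1) for _ in range(R + 1)]
coef[0][0] = 1
for i in range(1, R + 1):          # the factor 1/(1 - x^i y)
    for r in range(i, R + 1):      # ascending r allows repeated parts of size i
        for m in range(1, R + 1):
            coef[r][m] += coef[r - i][m - 1]

print(coef[7][3])  # 4 partitions: 5+1+1, 4+2+1, 3+3+1, 3+2+2
```

Both computations agree on every pair $(r,m)$ up to the truncation.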
Does improper integral exists or not?
For $x \in [0, \pi]$, we have $\left| \frac{2x}{\pi} - 1 \right| \leq \left| \cos x \right| \leq 1 $ and hence $$ 2\log \left| \frac{2x}{\pi} - 1 \right| \leq \log(\cos^2 x) \leq 0. $$ In particular, substituting $u = \frac{2x}{\pi} - 1$ gives $$ \int_{0}^{\pi} \left| \log(\cos^2 x) \right| \, \mathrm{d}x \leq - 2 \int_{0}^{\pi} \log \left| \frac{2x}{\pi} - 1 \right| \, \mathrm{d}x = -\pi \int_{-1}^{1} \log \left|u\right| \, \mathrm{d}u = 2\pi $$ and thus $\log(\cos^2 x)$ is integrable on $[0, \pi]$ by the comparison test. Then \begin{align*} \int_{0}^{\infty} \left| e^{-x} \log(\cos^2 x) \right| \, \mathrm{d}x &\leq \sum_{n=0}^{\infty} \int_{n\pi}^{(n+1)\pi} e^{-n\pi} \left| \log(\cos^2 x) \right| \, \mathrm{d}x \\ &= \frac{1}{1 - e^{-\pi}} \int_{0}^{\pi} \left| \log(\cos^2 x) \right| \, \mathrm{d}x \\ &< \infty, \end{align*} and therefore the improper integral converges. In general logarithmic singularity does not pose any issue for local integrability. In OP's case, the singularities of the integrand $e^{-x}\log(\cos^2 x)$ at $x = n\pi + \frac{\pi}{2}$ for $ n \in \mathbb{Z}$ are "benign", and so, only the singularity at $x=\infty$ matters.
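The comparison above can be checked numerically with a pure-Python midpoint rule (the exact value $\int_0^\pi |\log(\cos^2 x)|\,dx = 2\pi\log 2 \approx 4.355$ is classical, and indeed sits below the bound $2\pi \approx 6.283$); using an even number of cells keeps all midpoints away from the singularity at $\pi/2$:

```python
import math

n = 200_000              # even, so midpoints avoid the singularity at pi/2
h = math.pi / n
est = h * sum(abs(math.log(math.cos((k + 0.5) * h) ** 2)) for k in range(n))

print(est)               # ~4.355, comfortably below the bound 2*pi ~ 6.283
```

The midpoint rule handles the integrable logarithmic singularity well, which mirrors the remark that logarithmic singularities pose no issue for local integrability.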
Clarification in a proof that $n!\leq n^n$
$(n+1)!=(n+1)\,n!$ is valid by definition. $(n+1)\,n! \le (n+1)n^n$ comes from the induction hypothesis. From $n<n+1$ we get $n^n <(n+1)^n$, and therefore $(n+1)n^n<(n+1)^{n+1}$.
Uniqueness of ring product
It's true under the slightly weaker assumption that each of the $A_i$ and $B_i$ has no nontrivial idempotents; a commutative ring $R$ satisfying this property is called connected, because it's equivalent to $\text{Spec } R$ being connected in the Zariski topology. Conceptually the idea is that, if we write $R = \prod A_i \cong \prod B_i$, then $\text{Spec } A_i, \text{Spec } B_i$ must be two lists of the connected components of $\text{Spec } R$ (as an affine scheme), so must be the same up to permutation. I mention all this by way of motivation; it's possible to give a completely elementary proof in terms of idempotents without knowing any algebraic geometry (although I find it a little unmotivated without the geometric picture), as follows. Let $e \in \prod A_i$ be an idempotent. The condition $e^2 = e$ gives that each component of $e$ is either $0$ or $1$, so $\prod_{i=1}^n A_i$ has exactly $2^n$ idempotents; this already gives $n = m$. A nonzero idempotent is primitive if it can't be written as the sum of two other nonzero idempotents. The primitive idempotents of $\prod A_i$ are exactly those equal to $1$ in exactly one index and equal to $0$ otherwise, and an isomorphism $\prod A_i \cong \prod B_i$ sends primitive idempotents to primitive idempotents. Given an idempotent $e$ in a commutative ring $R$ we can consider the quotient of $R$ by $1 - e$, which can naturally be identified with $eR$; the quotients corresponding to primitive idempotents in $\prod A_i$ recover each of the factor rings $A_i$, so an isomorphism $\prod A_i \cong \prod B_i$ sends each $A_i$ to some $B_{\sigma(i)}$ as desired.
A sequence with variables, find the $mn^{th}$ term given the $m^{th}$ and the $n^{th}$?
There is a mistake, but it turns out to be harmless. From $nx=mx$ you conclude $n=m$, which you cannot do unless you know $x \neq 0$. Immediately after, you conclude $x=0$, which is all you use afterward. As others have remarked in the comments, you have to assume $m \neq n$, or you don't have enough information; then you can conclude $x=0$. Now, given your result, the $i^{\text{th}}$ term is $\frac i{mn}$ and the sum of the first $mn$ terms is $\frac {mn(mn+1)}{2mn}=\frac {mn+1}2$, as desired.
Confusions about the proof of representations of isometries as products of reflections.
I assume that you are aware of the fact that the "isometries of $\Bbb R^n$ that fix the origin" are exactly the "linear orthogonal transformation in $\mathrm O(\Bbb R^n)$". So, besides $u=v-w$, let us define $u'=v+w$, which satisfies $$\langle u,u'\rangle = \langle v-w,v+w\rangle = \|v\|^2-\|w\|^2 = 0.$$ We used that $f$ is an isometry via $\|w\|=\|w-0\|=\|f(v)-f(0)\| = \|v-0\|=\|v\|$. So we see that $u'$ must be contained in the reflection hyperplane of $r_u$, and so we have $r_uu'=u'$. Thus $$r_u f(v) = r_u w = r_u(-u/2+u'/2) = u/2 + u'/2 = v.$$ So indeed, $r_u f$ fixes $v$, and since $r_u$ and $f$ are linear, it also fixes $\Bbb R v$ pointwise. The next is a general result from representation theory (of finite groups), but it applies to single transformations as well: If an orthogonal map, let's call it $r\in\mathrm O(\Bbb R^n)$, fixes a subspace $U\subseteq\Bbb R^n$ setwise, then it also fixes its orthogonal complement $U^\bot$ setwise: for every $u\in U$ and $u'\in U^\bot$, we have $r^{-1}u\in U$ and $$\langle u, ru'\rangle = \langle r^{-1}u,u'\rangle = 0.$$ Thus $r u'\in U^\bot$. So since $r_u f$ fixes $\Bbb Rv$ pointwise (but also setwise), it also fixes the orthogonal complement of that (setwise), in particular, it restricts to an orthogonal map on this complement, and is thus an isometry on it.
Find an explicit formula for a conformal map
The map $$z\mapsto w=\sqrt{z}$$ (slit along ${\Bbb R}_+$, with values in the upper half plane) takes $G$ to a half disk in the upper half plane $\{w: |w|< 1, \mbox{im }w>0\}$. Then $$ w\mapsto u=\frac{1+w}{1-w}$$ (recall: Möbius maps take circles and lines to circles and lines) maps the half disk to the quarter plane $\{u: \mbox{re } u >0, \mbox{im }u>0\}$. Then $$ u \mapsto v=u^2$$ takes you into the upper half plane. Finally, $$ v\mapsto \frac{v-i}{v+i}$$ maps you onto the unit disk.
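Assuming $G$ is the unit disk slit along the positive real axis (which is what the branch of $\sqrt{z}$ with argument taken in $(0,2\pi)$ suggests), the whole composition can be checked numerically; a small Python sketch:

```python
import cmath
import math

def phi(z):
    # branch of sqrt with arg(z) in (0, 2*pi): cut along the positive reals
    t = cmath.phase(z) % (2 * math.pi)
    w = math.sqrt(abs(z)) * cmath.exp(1j * t / 2)   # upper half-disk
    u = (1 + w) / (1 - w)                           # quarter plane
    v = u * u                                       # upper half-plane
    return (v - 1j) / (v + 1j)                      # unit disk

# sample points of the slit disk G all land strictly inside the unit disk
for r in (0.1, 0.5, 0.9):
    for k in range(1, 20):
        z = r * cmath.exp(1j * (2 * math.pi * k / 20))
        assert abs(phi(z)) < 1
```

Each intermediate map can be checked the same way by asserting membership in the half disk, the quarter plane, and the upper half plane respectively.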
Why is tangent infinite sometimes?
You can picture the tangent as the line tangent to the unit circle at the starting point of the angle measurement, i.e. at the angles $n\cdot 2\pi$, $n \in \mathbb Z$. This picture gives you the intuition for the asymptotes (where the function goes to negative and positive infinity): as you approach the angle $\frac{\pi}{2}$, the line from the origin through the corresponding point of the circle has the same inclination as the tangent line, so the two lines never meet; that is why your graph tends to $\infty$ and you get an asymptote there.
Number of non-identity elements of order $7$ in a group
Two subgroups of order $7$ either are identical or intersect at $e$. Therefore, the subgroups of order $7$ induce a partition on the set of elements of order $7$. There are exactly $6$ elements of order $7$ in each subgroup of order $7$. Therefore, there are $6n$ elements of order $7$ if there are $n$ subgroups of order $7$.
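A quick illustration of the counting (a Python sketch using two small abelian groups as hypothetical examples): in $\mathbb Z_{49}$ there is one subgroup of order $7$, and in $\mathbb Z_7\times\mathbb Z_7$ there are eight, giving $6\cdot 1 = 6$ and $6\cdot 8 = 48$ elements of order $7$ respectively:

```python
from math import gcd, lcm
from itertools import product

def add_order(n, g):
    # additive order of g in Z_n
    return n // gcd(n, g)

count_z49 = sum(1 for g in range(49) if add_order(49, g) == 7)

count_z7z7 = sum(
    1
    for a, b in product(range(7), repeat=2)
    if lcm(add_order(7, a), add_order(7, b)) == 7
)

print(count_z49, count_z7z7)  # 6 and 48, both multiples of 6
```

In both cases the count is a multiple of $6$, as the partition argument predicts.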
Same moments and boundedness of one rv implies same distribution
Say $X$ is bounded by some $M>0$. Then $E(|X|^n)^{1/n}\leq M$ and $\limsup_n \frac 1n E(|X|^n)^{1/n}=0$, so $\varphi_X$ (the characteristic function of $X$) is analytic at $0$ with infinite radius: $$\forall t\in \mathbb R, \varphi_X(t) = \sum_{n=0}^\infty \frac{(it)^n}{n!}E(X^n)$$ Note that $E(|Y|^{2n})^{1/(2n)} = E(|X|^{2n})^{1/(2n)}\leq M$ and by Lyapunov's inequality, $$E(|Y|^{2n-1})^{1/(2n-1)} \leq E(|Y|^{2n})^{1/(2n)}\leq M$$ hence $\limsup_n \frac 1n E(|Y|^n)^{1/n}=0$ and, since $X$ and $Y$ have the same moments, $$\forall t\in \mathbb R, \varphi_Y(t) = \sum_{n=0}^\infty \frac{(it)^n}{n!}E(Y^n)=\sum_{n=0}^\infty \frac{(it)^n}{n!}E(X^n)=\varphi_X(t)$$ $X$ and $Y$ have the same characteristic function, hence the same distribution. Lemma: If $|X|\leq M$ a.s., $\varphi_X$ is analytic at $0$ with infinite radius. Proof: By Taylor's theorem with integral remainder $$\begin{aligned} \forall x\in \mathbb R, e^{ix} &= \sum_{k=0}^{n-1} \frac{(ix)^k}{k!} + \int_0^x \frac{(x-t)^{n-1}}{(n-1)!}i^ne^{it} dt \end{aligned}$$ hence $$\forall x\in \mathbb R, \left|e^{ix} - \sum_{k=0}^{n-1} \frac{(ix)^k}{k!}\right|\leq \frac{|x|^n}{n!}$$ Let $t\in \mathbb R$. By the triangle inequality for expectations, $$ \left|E(e^{itX}) - E\left(\sum_{k=0}^{n-1} \frac{(itX)^k}{k!}\right) \right|\leq E\left| e^{itX} - \sum_{k=0}^{n-1} \frac{(itX)^k}{k!} \right| \leq \frac{|t|^nE(|X|^n)}{n!} \leq \frac{(M|t|)^n}{n!} \to 0 $$ Hence $\displaystyle \varphi_X(t) = \sum_{n=0}^\infty \frac{(it)^n}{n!}E(X^n)$.
For some alphabet $\sum = \{a,b\}$ what is the concatenation $\sum\sum$?
The empty string $\lambda$ is a string over the alphabet, not a member of the alphabet. Think of any programming language: If the characters in a string in the language consist of (for example) Unicode symbols, the alphabet corresponds to the set of Unicode symbols. The empty string is a valid string in the language, but there is no Unicode symbol corresponding to the empty string. So $\Sigma \Sigma = \{aa, ab, ba, bb\}$.
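The concatenation is trivial to write out in code (a minimal Python sketch):

```python
Sigma = {"a", "b"}

# concatenation Sigma.Sigma: all strings xy with x, y in Sigma
SigmaSigma = {x + y for x in Sigma for y in Sigma}

print(sorted(SigmaSigma))  # ['aa', 'ab', 'ba', 'bb']
```

Note that the empty string is not an element of `Sigma`, which mirrors the point above: $\lambda$ is a string over the alphabet, not a symbol of it.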
Estimation of analytic function with monic polynomials
Let $N$ be the degree of the monic polynomial and write $f(z)=a_0+\sum_{n\ge 1}{a_nz^n}$, $p(z)=z^N+\sum_{0\le k \le N-1}b_kz^k$. Hence $\overline{p(e^{it})}=e^{-iNt}+\sum_{0\le k \le N-1}\overline{b_k}e^{-ikt}$, so $e^{iNt}\overline{p(e^{it})}=1+\sum_{1\le m \le N}\overline{b_{N-m}}e^{imt}$, and therefore $$f(0)=a_0=\dfrac{1}{2\pi} \int_{0}^{2\pi} e^{iNt}f(e^{it})\overline{p(e^{it})}\,dt.$$ Now take absolute values and use the usual integral inequality, and you are done, since $|p|=|\bar{p}|$ and $|e^{iNt}|=1$.
What is the time 550 minutes after 22:15?
$$\begin{align} 15\, &+ 22(60) = \rm\ start\ time\ (mins)\\ +\ \ \ 10\, &+\ \ 9(60) = 550 = \rm\ increment\\ \hline =\ \ 25\, &+ \color{#c00}{31}(60)\\ \equiv\ \ 25\, &+ \ \ \color{#c00}7(60)\!\pmod{\color{#c00}{24}(60)}\\[.1em] {\rm by}\quad\ \ &\color{#c00}{31}\equiv\color{#c00}7\quad\pmod{\color{#c00}{24}} \end{align}\qquad$$
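The same computation, as straightforward modular arithmetic in Python:

```python
start = 22 * 60 + 15            # 22:15 in minutes past midnight
t = (start + 550) % (24 * 60)   # add the increment, wrap around midnight
h, m = divmod(t, 60)
print(f"{h:02d}:{m:02d}")       # 07:25
```

The `% (24 * 60)` is exactly the $\pmod{24(60)}$ reduction in the display above.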
Expectation value summation
Okay this is pretty straightforward, posting for any future viewers... you need to start by breaking down the sum into $$\sum_{p=0}^{\infty}x^p = \sum_{p=0}^{N-1}x^p + \sum_{p=N}^{\infty}x^p$$ then $$ \sum_{p=0}^{N-1}x^p = \sum_{p=0}^{\infty}x^p - \sum_{p=N}^{\infty}x^p$$ and we get $$x\frac{d}{dx}\ln\left(\frac{1-x^N}{1-x}\right) = x\frac{d}{dx}[\ln(1-x^N)-\ln(1-x)]$$ from which the result follows.
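The underlying log-derivative trick can be checked numerically (a Python sketch; `N` and `x` are arbitrary test values, and the derivative is taken by central difference):

```python
import math

N, x = 5, 0.3

# left side: sum_{p<N} p x^p / sum_{p<N} x^p  (normalized first moment)
lhs = sum(p * x**p for p in range(N)) / sum(x**p for p in range(N))

# right side: x * d/dx log((1 - x^N)/(1 - x)), by central difference
def g(x):
    return math.log((1 - x**N) / (1 - x))

h = 1e-6
rhs = x * (g(x + h) - g(x - h)) / (2 * h)

assert abs(lhs - rhs) < 1e-6
```

This is the standard identity $x\,\frac{S'(x)}{S(x)} = \frac{\sum_p p\,x^p}{\sum_p x^p}$ for $S(x)=\sum_{p=0}^{N-1}x^p$, which is what the answer uses.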
Exercise 20 on Number Fields (Marcus) - Chapter 3
I think I have arrived at a possible solution (correct me if I am wrong). Clearly it suffices to show that $r:=\sum_{k=1}^{f_i}r_{imk}\beta_{ik}\equiv 0\pmod {Q_i}$. We know that $\alpha_{im}r\in Q_i^{m}$ and, by the definition of $\alpha_{im}$, it belongs to $Q_i^{m-1}$ but not to $Q_i^m$. Therefore the prime decomposition of the ideal $\left (\alpha_{im}\right )$ can be written as $Q_i^{m-1}A$ with $A$ an ideal not divisible by $Q_i$. Since $Q_i^m\mid \left (\alpha_{im}r\right )=Q_i^{m-1}A\left (r\right )$, then $Q_i\mid \left (r\right )$, therefore $r\in Q_i$ and we are done.
Why network flow satisfy the transitive property?
An $x,y$-flow of value $k$ is a flow with excess $-k$ at $x$ ($k$ more flow leaving than entering), excess $+k$ at $y$ ($k$ more flow entering than leaving) and excess $0$ at every other node (flow conservation). So if you add together a $u,v$-flow of value $k$ and a $v,w$-flow of value $k$ (edge by edge), you get something that's almost like a $u,w$-flow of value $k$. The only problem is that some edges might be used twice in the sum, exceeding capacity. Prove that if an edge $xy$ is used twice in the sum of the two flows, then the sum of the flows also contains a cycle containing $xy$. Then, we can subtract $1$ from the flow along each edge of the cycle, and avoid this problem. Repeat for every such edge, and you'll get an actual $u,w$-flow of value $k$.
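The argument can be seen concretely on a tiny example (a Python sketch with hypothetical node names; all capacities are $1$, so the shared edge $(a,b)$ is exactly the "used twice" problem, and cancelling the cycle $a \to b \to v \to a$ repairs it):

```python
from collections import defaultdict

def excess(flow, node):
    # inflow minus outflow at node
    return (sum(v for (x, y), v in flow.items() if y == node)
            - sum(v for (x, y), v in flow.items() if x == node))

# a u,v-flow and a v,w-flow, both of value 1, sharing edge (a, b)
f1 = {("u", "a"): 1, ("a", "b"): 1, ("b", "v"): 1}
f2 = {("v", "a"): 1, ("a", "b"): 1, ("b", "w"): 1}

total = defaultdict(int)
for f in (f1, f2):
    for e, v in f.items():
        total[e] += v

assert total[("a", "b")] == 2          # capacity 1 exceeded on the shared edge
assert excess(total, "u") == -1 and excess(total, "w") == 1
assert excess(total, "v") == 0         # v's demands cancel in the sum

# subtract 1 along the cycle a -> b -> v -> a through the overloaded edge
for e in [("a", "b"), ("b", "v"), ("v", "a")]:
    total[e] -= 1

assert max(total.values()) <= 1        # now a genuine u,w-flow of value 1
assert all(excess(total, n) == 0 for n in ("a", "b", "v"))
```

Subtracting along a cycle changes no node's excess, which is why the repaired sum is still a $u,w$-flow of the same value.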
Prove that $f(n)=o(g(n))$ implies there is computable $h(n)=o(g(n)) \in O_C(g(n))$ s.t. $f(n)=o(h(n))$
No, you cannot bring in $\log$ or any other specific function of $g$, because you cannot know the gap in order between $f$ and $g$. So let us put $p = f/g$; then $p = o(1)$, and let us use a real argument $x$ instead of a natural $n$. You need to prove that there exists some $q$ with $q = o(1)$ and $p = o(q)$; then $h = g/q$ will be the function you are looking for. The answer is yes, and here is a hint. Consider the functions $F(x) = \frac{1}{f(1/x)}$ and $G(x) = \frac{1}{g(1/x)}$ for positive $x$ near $0$. Now you are dealing with a modulus of continuity; you can read about the subject here. What you need is to find another modulus of continuity $H(x) = \frac{1}{h(1/x)}$ with $G < H < F$ for small $x > 0$; then $h(x)$ will be the function you need. Of course, generally speaking, $F$ and $G$ are not necessarily moduli of continuity, but you can bring them to that form without changing their order class. Also, for all functions $f(x) \gg x$ (i.e. $x = o(f)$), after the conversion the corresponding $F(x)$ won't be an actual modulus of continuity, but it will be an element of the analogous set of functions symmetric to the moduli of continuity across $F(x) = x$. So if both functions belong to the same set, you can find a modulus of continuity (or its symmetric counterpart) in between; otherwise, you can use $h(x) = x$. This is not a proof, just an idea.
Find equation of parabola give aos and y-intercept
The point you propose to use, $(-3/4,3)$, just follows from the symmetry of the parabola about $x = -3/8$, so you are not actually fixing a degree of freedom of the parabola. In a sense, the parabola can slide up and down. Let me give two examples to illustrate. Consider $y = 64(x+3/8)^2/3$, which satisfies all the properties above, and $y = 64(x+3/8)^2-6$, which again satisfies them. So the solution to the question will not be a single parabola; it will be a family of parabolas, $y_t(x) = t(x+3/8)^2+3-9t/64$.
Show that $\lim_{h\rightarrow 0}\frac{e^{ih}-1}{h}=i$
Hint: Use Euler's formula and split the limit into well known trigonometric limits.
Affine cone is a closed set
If $I$ is a homogeneous ideal in $k[T_0,...,T_n]$ it gives a closed set of the projective space of dimension $n$, but you can forget that it is homogeneous, and it will correspond to a closed set of the affine space of dimension $n+1$: this is the cone.
how to find a power series solution for this differential equation , by substitute odd and even number
The recurrence relation should be: $$a_{n+2}=-\dfrac {a_{n-1}}{(n+2)(n+1)} \text { for } n \ge 1$$ $$a_2=0$$ The solution can't be expressed in terms of elementary functions; you will need the Airy functions for the solution of the DE.
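Assuming the ODE in question is $y''+xy=0$ (an Airy-type equation, which is what this recurrence together with the mention of Airy functions corresponds to), one can verify with exact rational arithmetic that the recurrence kills every power-series coefficient of $y''+xy$:

```python
from fractions import Fraction

N = 30
a = [Fraction(0)] * (N + 3)
a[0], a[1], a[2] = Fraction(1), Fraction(1), Fraction(0)  # a0, a1 free; a2 = 0
for n in range(1, N + 1):
    a[n + 2] = -a[n - 1] / ((n + 2) * (n + 1))

# coefficient of x^n in y'' + x*y is (n+2)(n+1) a_{n+2} + a_{n-1} for n >= 1,
# and 2*a_2 for n = 0: all must vanish
assert 2 * a[2] == 0
assert all((n + 2) * (n + 1) * a[n + 2] + a[n - 1] == 0
           for n in range(1, N + 1))
print("recurrence solves y'' + x y = 0 up to degree", N)
```

The two free coefficients $a_0, a_1$ correspond to the two independent Airy-type solutions.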
If $Z_1,Z_2,…$ are i.i.d., $X_0$ is independent of $Z_1,Z_2,…$ and $X_n=φ(X_{n-1},Z_n)$, then $X_0,…,X_{n-1}$ is independent of $φ(x,Z_n)$
Each of the variables $X_0,X_1,...,X_{n-1}$ is measurable w.r.t. $\sigma \{X_0,Z_1,...,Z_{n-1}\}$ and $Z_n$ is independent of this sigma algebra. Hence $\phi (x,Z_n)$ is independent of this sigma algebra, which makes it independent of $X_0,X_1,...,X_{n-1}$.
Intuitionistic Proof of $(a \Rightarrow b) \Rightarrow (\lnot b \Rightarrow \lnot a)$
1. Assume $a \to b$.
2. Assume $\lnot b$.
3. Assume $a$.
4. Then $b$.
5. Then $\bot$.
6. Eliminating (3), $\lnot a$.
7. Eliminating (2), $\lnot b \to \lnot a$.
8. Eliminating (1), $(a \to b) \to (\lnot b \to \lnot a)$.

I think everything is clear up to step 5; beyond that you need a clear grasp of the notion of conditional proof. (You also need to remember that $\lnot x$ is an abbreviation for $x \to \bot$.)
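The same derivation can be written as a term in Lean 4 (a sketch; the theorem name is mine), where $\lnot x$ likewise unfolds to a function into `False`, so no classical axioms are needed:

```lean
-- intuitionistic contraposition: hab : a → b, hnb : ¬b, ha : a
theorem contrapose {a b : Prop} : (a → b) → (¬b → ¬a) :=
  fun hab hnb ha => hnb (hab ha)
```

The lambda structure mirrors steps 1–3 (the three assumptions) and the application `hnb (hab ha)` is steps 4–5.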
By changing the variable in the φ equation to x = cos φ, derive the self adjoint form of the Legendre equation
Since you have $x=\cos \phi$, then: $$\text { 1) }\frac {dx}{d\phi}=-\sin \phi$$ $$\frac {dx}{d\phi}=-\sqrt {1-\cos^2 \phi}=-\sqrt {1-x^2}$$ Also apply the chain rule: $$\text { 2) }\frac {dP}{d\phi}=\frac {dP}{dx}\frac {dx}{d\phi} \text{ , and } \frac {d}{d\phi}=\frac {dx}{d\phi}\frac {d}{dx}$$ I think you can take it from there.
Are Hessian matrices always symmetric?
No, it is not true in general. You need that $\frac {\partial^2 f}{\partial x_i\partial x_j} = \frac {\partial^2 f}{\partial x_j\partial x_i}$ in order for the Hessian to be symmetric. This holds, in general, only if the second partial derivatives are continuous. This is called Schwarz's theorem.
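The standard counterexample is $f(x,y)=xy(x^2-y^2)/(x^2+y^2)$ with $f(0,0)=0$: both mixed second partials exist at the origin but are not equal there. A numerical sketch (the step sizes are ad hoc choices, with the inner step much smaller than the outer one):

```python
def f(x, y):
    if x == 0 and y == 0:
        return 0.0
    return x * y * (x * x - y * y) / (x * x + y * y)

h = 1e-8   # inner step for the first partials
H = 1e-4   # outer step, much larger than h

def fx(x, y):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def fy(x, y):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

fxy = (fx(0, H) - fx(0, -H)) / (2 * H)   # d/dy of f_x at the origin
fyx = (fy(H, 0) - fy(-H, 0)) / (2 * H)   # d/dx of f_y at the origin

print(fxy, fyx)   # approximately -1 and +1: the Hessian is NOT symmetric here
```

Away from the origin the second partials of this $f$ are continuous, so Schwarz's theorem applies everywhere else.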
How do I solve the following difference differential equation
Given $x_n(0)=0$: plug in $t=0$ to get $x_n'(0)=0$. Differentiate the equation and plug in $t=0$ to get $x_n''(0)=0$, and so on: all derivatives of $x_n$ at $t=0$ are zero. So start with any function $x_0(t)$ with all derivatives $0$ at $t=0$, then recursively plug in to get solutions for all the other $n$ in terms of it. For example $x_0(t) = \exp(-1/t^2)$, with of course $x_0(0)=0$. $$ x_{{1}} \left( t \right) ={\frac { \left( x_{{0}} \left( t \right) g-x'_{{0}} \left( t \right) \right) \sqrt {2}} {2g}} $$ $$ x_{{2}} \left( t \right) ={\frac { 3\,x_{{0}} \left( t \right) {g}^{2}-4\, x'_{{0}} \left( t \right) g+x''_{{0}} \left( t \right) }{{g}^{2}}} $$ $$ x_{{3}} \left( t \right) ={\frac {15\,x_{{0}} \left( t \right) { g}^{3}-23\, x'_{{0}} \left( t \right) {g }^{2}+9\, x''_{{0}} \left( t \right) g-x'''_{{0}} \left( t \right) }{12{g}^{3}}} $$
smallest positive definite matrix
Good news: your question is well studied. (Not so) bad news: there is no closed-form solution in general. Note that $A-cB$ is positive (semi)definite for $c=0$, thus $0$ is a lower bound. Define $F=A-cB$ and let $\lambda_{min}(\cdot)$ denote the minimum eigenvalue of its argument. Closed-form solution: a closed form exists whenever $B$ is invertible, and in that case $c=\lambda_{min}(B^{-1}A)$. (Please try on your own to prove it.) Iterative solution: whenever $B$ is not invertible, you will need to resort to some other technique. The key observation here is that your problem is convex and belongs to the class of semidefinite programming. Thus, any standard convex package, e.g. CVX (search Google), should be able to solve it. If you are insistent on a custom line-search algorithm, make the observation that the one-dimensional function $f(c)$, defined as \begin{align} f(c)=\lambda_{min}(A-cB), \end{align} is a concave function of $c$ over the real line. It is a monotonically non-increasing function of $c$ if one restricts to non-negative values of $c$, which is the interesting range for you. Note that $f(0)\geq 0$. Thus, you are interested in the point where $f(c)$ crosses the $x$-axis, so the standard bisection algorithm will definitely help: it finds the zero-crossing point of the input function, which is what you need here.
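Here is a NumPy sketch of both approaches on random positive definite $A$ and $B$ (so the closed form applies and the two answers can be compared); the bisection follows the monotone, concave $f(c)=\lambda_{\min}(A-cB)$ described above:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)); A = M @ M.T + np.eye(4)   # A positive definite
N = rng.standard_normal((4, 4)); B = N @ N.T + np.eye(4)   # B positive definite

def f(c):
    return np.linalg.eigvalsh(A - c * B)[0]   # smallest eigenvalue

# bisection for the zero crossing of the non-increasing f on c >= 0
lo, hi = 0.0, 1.0
while f(hi) > 0:
    hi *= 2
for _ in range(80):
    mid = (lo + hi) / 2
    if f(mid) >= 0:
        lo = mid
    else:
        hi = mid

# closed form: c = lambda_min(B^{-1} A); eigenvalues are real since B > 0
c_closed = min(np.linalg.eigvals(np.linalg.solve(B, A)).real)

print(lo, c_closed)   # the two estimates agree
```

For singular $B$ only the bisection branch survives, which is exactly the distinction drawn above.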
A convergent sequence that is defined recursively
We have $a_n=\sum_{j=1}^{n-1}\frac{\sqrt{|a_j|}}{j^2}+a_1$. Let $M$ be such that $M>|a_1|+\frac{\pi^2}6\sqrt M$. Assume that $|a_j|\leqslant M$ for all $1\leqslant j\leqslant n-1$. Then $$|a_n|\leqslant \sqrt M\sum_{j=1}^{n-1}\frac 1{j^2}+|a_1|<M.$$ As $|a_1|<M$, we are done.
Question about the tensor algebra
It is there so that this is an $F$-algebra; you need a ring homomorphism from $F$ to $T(V)$ in order to have this structure, and being an algebra over a field is a useful structural condition. You also just get better theorems if you don't mutilate $T(V)$ or $Sym(V)$ by removing the field! But I guess you could clarify your question - necessary for what? From a geometric point of view, it is natural to consider functions as a simple case of tensor fields or differential forms. This is relevant if you are thinking about the various derivatives defined on these algebras; often these are defined on the degree 0 part in the usual way and then extended by some sort of Leibniz rule. So the constant functions are just a natural part of this ring, and it simplifies things to keep them in it. If you want to think about $Sym(V^*)$ as the algebraic functions on the affine space $V$, the constant functions are just there - they are the simplest functions! I can say more about modules over $k[t]$ than I can about modules over $k[t] \setminus k^*$, at least in part because the former are also vector spaces over $k$, and vector spaces are nice.
What is the Logic to be used for Solving this Problem?
From the equation of $x$ you can get $x=2+\frac{1}{x}$ i.e. $x^2=2x+1$ Then after plugging into given rational function you get $$\frac{3x^2+5x-3}{2x^2-4x+5}=\frac{6x+3+5x-3}{4x+2-4x+5}=\frac{11x}{7}.$$ Now solve quadratic equation for $x$ and plug it in here.
If $(\frac{Q}{P})=1$, then the congruence above doesn't have a solution.
Consider $P=15$ and $Q=17$. Note that the Legendre symbols $(17/3)$ and $(17/5)$ are both $-1$, so the Jacobi symbol $(Q/P)$ is equal to $1$. But the congruence $X^2\equiv 17\pmod{15}$ does not have a solution. For if it did, the congruence $X^2\equiv 17\pmod{3}$ would have a solution. But it doesn't. We conclude that if a Jacobi symbol $(Q/P)$ is equal to $1$, it is not necessarily the case that the congruence $X^2\equiv Q\pmod{P}$ has a solution. Of course there are always some $Q$ such that $(Q/P)=1$ and the congruence $X^2\equiv Q\pmod{P}$ has a solution. Take for example $Q=1$.
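Both claims are easy to machine-check (a Python sketch computing the Legendre symbols by Euler's criterion):

```python
def legendre(a, p):
    # Euler's criterion: a^((p-1)/2) mod p, mapped to {-1, 0, 1}
    r = pow(a, (p - 1) // 2, p)
    return r - p if r == p - 1 else r

assert legendre(17, 3) == -1 and legendre(17, 5) == -1
jacobi_17_15 = legendre(17, 3) * legendre(17, 5)
assert jacobi_17_15 == 1

# yet x^2 = 17 = 2 (mod 15) has no solution
assert all(pow(x, 2, 15) != 17 % 15 for x in range(15))

# while e.g. Q = 1 has (Q/P) = 1 and an obvious solution
assert pow(1, 2, 15) == 1 % 15
```

The brute-force loop over all residues mod $15$ is exactly the "it doesn't" in the argument above.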
Completeness of a certain normed space
The sequence $f_n$ is uniformly convergent because of the norm you have chosen: $$\Vert f_n-f\Vert=\sup_{x}|f_n(x)-f(x)|.$$ Thus, for any $y$, $|f_n(y)-f(y)|\leq \sup_{x}|f_n(x)-f(x)|=\Vert f_n-f\Vert.$ So, if you insist that $\Vert f_n-f\Vert<\varepsilon/3,$ then $|f_n(y)-f(y)|<\varepsilon/3$ for every $y$.
Dual of generalized Reed-Solomon code
Clearly $$c=(1\cdot f(1),1 \cdot f(a),1\cdot f(a^2),\cdots,1\cdot f(a^{n-1}))=(f(1),f(a),f(a^2),\cdots, f(a^{n-1})),$$ and $$c'=(1\cdot g(1),a \cdot g(a),a^2\cdot g(a^2),\cdots,a^{n-1}\cdot g(a^{n-1}))=(g(1),ag(a),a^2g(a^2),\cdots, a^{n-1}g(a^{n-1})),$$ where $f$ has degree at most $k-1$ and $g$ has degree at most $n-k-1$, so the product has degree at most $n-2$. One then needs to show that the product is actually the zero polynomial, using Lagrange interpolation. You can refer to John Hall's very useful Coding Theory notes (Chapter 5) at Michigan State, if you need more details.
Closed form for solution of $f(n) = an + b\sum\limits_{k=0}^{n-1}f(k)$
The most usual approach based on generating functions works. To wit, note that $f(0)=0$, $f(1)=a$, and that $$F(s)=\sum_{n=0}^\infty f(n)s^n$$ solves $$F(s)=\sum_{n=0}^\infty ans^n+b\sum_{n=0}^\infty \sum_{k=0}^{n-1}f(k)s^n=as\sum_{n=0}^\infty ns^{n-1}+b\sum_{k=0}^\infty f(k)\sum_{n=k+1}^{\infty}s^n$$ that is, $$F(s)=\frac{as}{(1-s)^2}+b\sum_{k=0}^\infty f(k)\frac{s^{k+1}}{1-s}=\frac{as}{(1-s)^2}+\frac{bs}{1-s}F(s)$$ which implies that $$F(s)=\frac{as}{(1-s)(1-(b+1)s)}=\frac{a}b\left(\frac1{1-(b+1)s}-\frac1{1-s}\right)$$ that is, $$F(s)=\frac{a}b\left(\sum_{n=0}^\infty(b+1)^ns^n-\sum_{n=0}^\infty s^n\right)$$ which indeed yields, for every $n$, $$f(n)=\frac{a}b\left((b+1)^n-1\right)$$
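The closed form can be cross-checked against the recurrence with exact arithmetic (a Python sketch; the values of $a$ and $b$ are arbitrary nonzero test choices):

```python
from fractions import Fraction

a, b = Fraction(3), Fraction(2)   # arbitrary nonzero test values

f = [Fraction(0)]                 # f(0) = 0
for n in range(1, 12):
    f.append(a * n + b * sum(f))  # f(n) = a*n + b * sum_{k<n} f(k)

# closed form from the generating function
for n in range(12):
    assert f[n] == (a / b) * ((b + 1) ** n - 1)
```

Using `Fraction` avoids any floating-point doubt about the equality of the two sides.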
Simple explanation of Comb inequalities in TSP
Here $x(E(S))$ is the number of edges in the cycle whose vertices are both in set $S$ of vertices. For any set $S$ of vertices, $x(E(S)) = |S| - d(S)/2$, where $d(S)$ is the number of edges of the cycle with one vertex in $S$ and one out. Thus $$ x(E(H)) + \sum_i x(E(T_i)) = |H| + \sum_i |T_i| - \frac{1}{2} \left(d(H) + \sum_i d(T_i)\right)$$ The inequality then says $$d(H) + \sum_i d(T_i) \ge 3t + 1$$ For each $i$ let $d_i(H)$ be the number of edges with one vertex in $H \cap T_i$ and one outside $H$. Then $d(H) \ge \sum_i d_i(H)$. Now it's not hard to see that $d_i(H) + d(T_i) \ge 3$ (think of the various ways the cycle can enter and leave $H \cap T_i$ and $T_i \backslash H$), so that gives us $d(H) + \sum_i d(T_i) \ge 3t$. But $d(H)$ and $d(T_i)$ are all even and $3t$ is odd, so we can't have equality here: we must have $d(H) + \sum_i d(T_i) \ge 3t+1$.
Finding affine transformation
HINT : $$\frac{(x+1)^2}{2}+\frac{(y-1)^2}{\frac 12}=1$$ $$\Rightarrow \frac{x^2}{2}+\frac{y^2}{\frac 12}=1$$ $$\Rightarrow \frac{\left(\frac{\sqrt 2}{3}x\right)^2}{2}+\frac{\left(\frac{1}{4\sqrt 2}y\right)^2}{\frac 12}=1$$ $$\Rightarrow \frac{x^2}{9}+\frac{y^2}{16}=1$$
Show $|\langle x\rangle|=|\langle x^2\rangle|$ if the order of $x$ is odd.
Suppose $m > 0$ is such that $(x^2)^{m} = e$, i.e., $x ^ {2m} = e$. By Lagrange, $\text{ord}(x) \mid 2m$, i.e., $2 n + 1 \mid 2m$ and therefore $2 n + 1 \mid m$. So, $m \geq 2n + 1$.
Find $\lim\limits_{n \to \infty}\int_0^1 \ln^m x\ln^n(1+x){\rm d}x$ where $m,n\in \mathbb{N^+}$.
Well, all I would do is elaborate on the hint $0\leq\ln(1+x)\leq\ln 2<1$ for $x\in[0,1]$: it gives $$\left|\int_0^1 \ln^m x\ln^n(1+x)\,{\rm d}x\right| \leq (\ln 2)^n \int_0^1 \left|\ln^m x\right|{\rm d}x = (\ln 2)^n\, m! \to 0 \quad (n\to\infty).$$ Thus, by the sandwich/squeeze theorem, the answer is $0$.
Image of a line not passing through the center of inversion
In response to a comment to the Q from the OP. $z\bar w+\bar z w+c=0 \iff 2Re (z\bar w)+c=0.$ So in order that this represents a line, and not the empty set nor a single point, we must have $c\in \Bbb R$ and $w\ne 0$. And in order that $0$ is not on this line we must have $c\ne 0.$ Let $z'=x+iy$ and $w=a+ib$ with $x,y,a,b \in \Bbb R.$ Then because $1/c\in \Bbb R$ we have, from the last displayed line in the Q, $$z'\bar w/c+\bar z' w/c+z'\bar z'=0 \iff$$ $$\iff (1/c)\cdot 2Re (z'\bar w)+|z'|^2=0 \iff$$ $$\iff (1/c)\cdot 2(xa+yb)+x^2+y^2=0 \iff$$ $$\iff (x+a/c)^2+(y+b/c)^2=(a^2+b^2)/c^2 \iff$$ $$\iff |z'+w/c|^2=|w|^2/c^2\iff$$ $$\iff |z'+w/c|=|w|/|c|.$$ Since $|w|\ne 0,$ that is the equation of a circle $C$ centered at $-w/c$ (not $-\bar w/c$ as in the Q ) with radius $|w|/|c|.$ Note: The equation for $C$ is satisfied if $z'=0$ but there is no $z$ on the line that inverts to $0.$ The image of the line under the inversion is $C\setminus \{0\}.$
If $f:\mathbb{R}^{2}\to \mathbb{R}$ continuous, strictly increasing with $f(x(y),y)=0$ for unique $x(y)$ for each $y$
Since we already know that $f$ is strictly increasing in $x$, for a fixed $y$, there can be at most one $x$ for which $f(x, y)$ takes on any given value. So all that the $f(x(y), y) = 0$ condition adds is the requirement that for every $y$ there exists an $x$ for which $f(x, y) = 0$. Now consider $f(x, y) = x^3 + y$. This satisfies both conditions, so your claim is that $f(x, y) = F(x + \sqrt[3]y)$ for some $F$. But letting $y = 0$ gives $F(x) = x^3$ for all $x$. So we would have $x^3 + y = \left (x + \sqrt[3]y \right)^3$, which is false.
Endomorphism rings of a $k$-algebra
It has two isoclasses of simple right modules. As you mentioned in the comments, there are those two maximal ideals. Those are the only maximal ideals since the maximal ideals correspond to the maximal ideals of $k\times k$, of which there are just two. So you have answered that part of the question yourself. The second assertion is trivially true because the composition length of the ring as a right module is $3$, and composition length is additive, that is: $\ell(M)=\ell(N)+\ell(M/N)$ (whenever composition length is defined for $M$ and $N$).
absolute value definition.
$$ \left|-5\right|=-\left(-5\right)=5 $$ Here the argument $x$ is negative too; in fact, if $x<0$, $$ \left|x\right|=-x, $$ and if $x \geq 0$, $$ \left|x\right|=x. $$
An infinite set with the cofinite topology is not Hausdorff.
The complement of an open set is finite. Can another open set fit in the complement?
Does the ring in this Fast Fourier Transform image of a hexagonally close packed structure have significance?
The ring appears because your input is not a perfect Dirac comb; its size should increase if you use a higher-resolution image. The mathematical description, using the convolution theorem of the Fourier transform, is the following: your input is a Dirac comb convolved with the shape of your dots. For that reason the FFT of your input gets multiplied by the FFT of the shape of your dot. The smaller the dot, the larger the ring; that's the reciprocity of Fourier space. Additionally, you have the cross artifacts from the finite size of your input; they can be explained the other way around.
Multivariable calculus: How do I find the Taylor series for a function about a certain point?
Since $f$ is a polynomial it is for each point $(x_0,y_0)$ its own Taylor expansion at $(x_0,y_0)$ in disguise. Given $(x_0,y_0):=(1,-1)$ write $x:=1+\xi$, $y:=-1+\eta$ and obtain $$\eqalign{\hat f(\xi,\eta)&=f(1+\xi,-1+\eta)=(1+\xi)^2+(1+\xi)(-1+\eta)+(-1+\eta)^3\cr &=-1+(\xi+4\eta)+(\xi^2+\xi\eta-3\eta^2)+\eta^3\ .\cr}$$ Of course you can rewrite that in the form $$f(x,y)=-1+\bigl((x-1)+4(y+1)\bigr)+\bigl((x-1)^2+(x-1)(y+1)-3(y+1)^2\bigr)+(y+1)^3\ .$$
Exclusive-NOR (XNOR) and his CNF sub-expression
You can read a CNF off the truth table for XNOR: $C \leftrightarrow \overline{A\oplus B}$ exactly iff $$ A \land B \to C \\ A \land \bar B \to \bar C \\ \bar A \land B \to \bar C \\ \bar A \land \bar B \to C $$ which by unfolding the implications becomes $$ (\bar A \lor \bar B \lor C) \land \\ (\bar A \lor B \lor \bar C) \land \\ (A \lor \bar B \lor \bar C) \land \\ (A \lor B \lor C) $$
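A brute-force truth-table check in Python confirms that this CNF is equivalent to $C \leftrightarrow \overline{A\oplus B}$:

```python
from itertools import product

for A, B, C in product((False, True), repeat=3):
    xnor_constraint = (C == (A == B))          # C <-> not (A xor B)
    cnf = ((not A or not B or C) and
           (not A or B or not C) and
           (A or not B or not C) and
           (A or B or C))
    assert cnf == xnor_constraint
print("CNF matches on all 8 rows")
```

Each clause rules out exactly one of the four falsifying rows, which is why the CNF has four clauses.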
How to remember definition of *additive* arithmetic function
Yes, the way to remember it is to know that logarithm is the poster child for an additive function.
if composition $f\circ g$ is a function then both $f,g$ are functions
Hint: Try $A,C$ sets with a single element. If $f= A \times B$ and $g=B \times C$, with $B \neq \emptyset$, then $f \circ g$ is a function.
Show transformation on function is linear
Hint: Suppose that $f,g\in X$. You need to show that $P(f+g) = Pf + Pg$ and $P(\alpha f) = \alpha Pf$, for $\alpha\in\mathbb{R}$. Recall that $f+g$ is defined by $(f+g)(x) = f(x)+g(x)$, for $x\in [-1,1]$. Similarly $\alpha f$ is defined by $(\alpha f)(x) = \alpha f(x)$, for all $x\in[-1,1]$.
$f(x,y)=(x+y,x^2+y^2)$ is this function globally $1-1$?
If $f(x,y)=(a,b)$ then $a^2-b=2xy$ and so $x$ and $y$ are the roots of $$t^2-at+\frac{a^2-b}2=0.$$ Thus $x$ is the larger, and $y$ the smaller root.
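On the half-plane $x\ge y$ this inversion is explicit, and a quick Python check confirms that the quadratic recovers the pair:

```python
import math

def f(x, y):
    return (x + y, x * x + y * y)

def f_inverse(a, b):
    # x and y are the roots of t^2 - a t + (a^2 - b)/2 = 0
    disc = 2 * b - a * a                       # = (x - y)^2 >= 0
    root = math.sqrt(disc)
    return ((a + root) / 2, (a - root) / 2)    # larger root is x, smaller is y

for x, y in [(3, 1), (2.5, -0.5), (0, 0), (1.25, 1.25)]:
    a, b = f(x, y)
    xr, yr = f_inverse(a, b)
    assert abs(xr - x) < 1e-9 and abs(yr - y) < 1e-9
```

Since $2b - a^2 = (x-y)^2$, the discriminant is always nonnegative, and the ordering convention $x \ge y$ is what makes $f$ one-to-one on that region.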
In a separable Hilbert space, can you write an operator from $\mathcal H$ to $\mathcal H$ as a column-finite matrix?
Begin with some orthonormal basis $(e_n)$. Then form another ONB $\mathcal F$ by the following algorithm: $\mathcal F=()$, empty sequence. Find the smallest $n$ such that $e_n$ is not in the span of $\mathcal F$. Add it to the end of $\mathcal F$ and apply Gram-Schmidt. In the finite sequence $\mathcal F=(f_k)$, find the smallest $k$ such that $Tf_k$ is not in the span of $\mathcal F$. Add $Tf_k$ to the end of $\mathcal F$ and apply Gram-Schmidt. If there is no such $k$, skip this step. Return to step 2. Okay, this is not technically an algorithm because it never terminates. But you can check that it creates an ONB with the desired property. Completeness is assured by step 2, while the finite-columnness comes from step 3: the image of every element of $\mathcal F$ is in the span of finitely many elements.
proof: $a,b,c \in \mathbb{R}, b > a, c > 0 $, $\Rightarrow$ $bc > ac$
In solution $1$: You say that $bc>ac$ implies $bcc^{-1} > acc^{-1}$. Which axiom did you use to prove this? You say that $bc > ac$ implies $b-a>0$, therefore it is true. WARNING: what you just committed is a very common logical fallacy. If you know that $A$ implies $B$ and you know that $B$ is true, you CANNOT conclude that $A$ is true! Example: if the sun rotates around the earth, then the sun's position in the sky will change over time. The sun's position in the sky changes over time. Therefore, the sun rotates around the earth. Solution $2$ is better; just explain: what is the reason that $(b-a), c\in P$ implies that $c(b-a)\in P$? Other than that, solution $2$ looks OK.
Prove that $s(z)=\frac{1}{1+e^{-z}}$ is always increasing
$e^z$ is increasing, $e^{-z}$ is decreasing, $e^{-z}+1$ is decreasing and positive, $\dfrac1{e^{-z}+1}$ is increasing. $$z_0<z_1\implies-z_0>-z_1\implies e^{-z_0}>e^{-z_1}\implies e^{-z_0}+1>e^{-z_1}+1>0 \\\implies\frac1{e^{-z_0}+1}<\frac1{e^{-z_1}+1}.$$
Probabilistic inequality with two random variables
I don't think independence is necessary. Since $E[\xi-\eta]>0$, $P(\eta=\infty)=0$. So, for almost every $\omega$, by the Archimedean property we can find an integer $k$ such that $(k+1)\xi(\omega)\ge \eta(\omega)+y$. Hence $$P\left(\bigcup_{k=1}^{\infty}\{(\xi-\eta)+k\xi\ge y\}\right)=1$$ Since the events in the union are increasing in $k$, we have $$\lim_{k\to \infty}P\{(\xi-\eta)+k\xi\ge y\}=1$$
Quotient norm banach space
The author wrote One can check that the distance is $\frac12$. That doesn't (necessarily) mean it's obvious. Let's see what we can find. For $a = 0$, we obviously have $\lVert e_1 - a\rVert_1 = 1$, so let's try to find something closer. Write $$a = c_1\cdot e_1 - \sum_{k=2}^\infty c_k\cdot e_k,$$ where not all $c_k = 0$. Then to have $a\in A$, we must have $$\sum_{k=2}^\infty \frac{k}{k+1} c_k = \frac12 c_1,$$ and we have $$\begin{align} \lVert e_1 - a\rVert_1 &= \lvert 1-c_1\rvert + \sum_{k=2}^\infty \lvert c_k\rvert\\ &> \lvert 1-c_1\rvert + \sum_{k=2}^\infty \frac{k}{k+1}\lvert c_k\rvert\tag{1}\\ &\geqslant \lvert 1-c_1\rvert + \left\lvert \sum_{k=2}^\infty \frac{k}{k+1}c_k\right\rvert\\ &= \lvert 1-c_1\rvert + \frac12\lvert c_1\rvert. \end{align}$$ Now it is easy to see that the last lower bound is minimised for $c_1 = 1$, with $$\lVert e_1 -a\rVert_1 > \frac12.$$ So we need to see that we can come arbitrarily close to a distance of $\frac12$. Well, for $a_k = e_1 - \dfrac{k+1}{2k}\cdot e_k,\; k \geqslant 2$, we have $a_k \in A$ and $$\lVert e_1 - a_k\rVert_1 = \left\lVert \frac{k+1}{2k}e_k\right\rVert = \frac{k+1}{2k} = \frac12 + \frac{1}{2k} \to \frac12,$$ so indeed $$\inf_{a\in A} \lVert e_1 - a\rVert_1 = \frac12,$$ and by $(1)$, the distance is not attained.
The extension of Fourier transform from $L^1\cap L^2$ to $L^2$
In general, it isn't. Here, "on" doesn't mean what it usually means, namely from one space onto itself, but rather that the extension is an $L^2$ isometry from $L^1\cap L^2$ to $L^2$. A good example is $\chi_{[-1,1]}$: its Fourier transform (a sinc function) has finite $L^2$ norm, but it is not Lebesgue integrable.
Theorem 0.7. Walters' book of Ergodic Theory
Hint: The proof is rather lengthy, but the idea is simple. Consider the collection of subsets: $$ {\cal C} = \{ S \in {\cal B} \ | \ \forall \epsilon>0, \exists A\in {\cal A} : \mu(S\Delta A)<\epsilon\}.$$ This collection clearly contains ${\cal A}$. Now (the lengthy part) show that ${\cal C}$ is a $\sigma$-algebra, whence it contains ${\cal B}$, and you are done.
Crank-Nicolson for coupled PDE's
$\renewcommand{\d}{\vec{d}} \renewcommand{\S}{\vec{S}} \renewcommand{\v}{v} \renewcommand{\r}{\rho}$ I solved the problem by doing the following: starting from \begin{align*} \v_i^{n+1} + r\left(\r_{i+1}^{n+1} - \r_{i-1}^{n+1}\right) &= \v_i^n - r\left(\r_{i+1}^n - \r_{i-1}^n\right) \\ \r_i^{n+1} + r\left(\v_{i+1}^{n+1} - \v_{i-1}^{n+1}\right) &= \r_i^n - r\left(\v_{i+1}^n - \v_{i-1}^n\right) \end{align*} Define a vector $\vec{S}_i^n = (\v_i^n , \r_i^n)^{\mathrm T}$. Then we can write these two equations as $$ \S_i^{n+1} + \begin{bmatrix} 0 & r \\ r & 0 \end{bmatrix} \left(\S_{i+1}^{n+1} - \S_{i-1}^{n+1}\right) = \S_i^n - \begin{bmatrix} 0 & r \\ r & 0 \end{bmatrix} \left(\S_{i+1}^n - \S_{i-1}^n\right) $$ Call the $2\times 2$ matrix above $A$. Then we write our set of equations as $$ \begin{bmatrix} 1 & A & & & 0 \\ -A & 1 & \ddots & & \\ & \ddots & \ddots & & \\ & & & 1 & A \\ 0 & & & -A & 1 \end{bmatrix} \begin{bmatrix} \S_1^{n+1} \\ \S_2^{n+1} \\ \vdots \\ \S_{N-2}^{n+1} \\ \S_{N-1}^{n+1} \end{bmatrix} = \begin{bmatrix} 1 & -A & & \! & 0 \\ A & 1 & \ddots & \! & \\ & \ddots & \ddots & \! & \\ & & & \! 1 & -A \\ 0 & & & \! A & 1 \\ \end{bmatrix} \begin{bmatrix} \S_1^{n} \\ \S_2^{n} \\ \vdots \\ \S_{N-2}^{n} \\ \S_{N-1}^{n} \\ \end{bmatrix} + \begin{bmatrix} A\left(\S_0^{n} + \S_0^{n+1}\right) \\ 0 \\ \vdots \\ 0 \\ -A\left(\S_N^{n} + \S_N^{n+1}\right) \\ \end{bmatrix} $$
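Not part of the original answer, but here is a rough Python sketch of one time step of this block system with a plain Gaussian-elimination solve (the grid size, $r$, and boundary data below are made up for illustration). A cheap sanity check: a spatially constant state satisfies the scheme exactly, so one step should reproduce it.

```python
# Sketch: one Crank-Nicolson step for the coupled system in the block form
# above, with A = [[0, r], [r, 0]] acting on S_i = (v_i, rho_i).

def solve_dense(M, rhs):
    """Plain Gaussian elimination with partial pivoting."""
    n = len(M)
    M = [row[:] for row in M]
    rhs = rhs[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for i in range(col + 1, n):
            f = M[i][col] / M[col][col]
            for j in range(col, n):
                M[i][j] -= f * M[col][j]
            rhs[i] -= f * rhs[col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (rhs[i] - s) / M[i][i]
    return x

def cn_step(S, S0, SN, r):
    """Advance the interior unknowns S_1..S_{N-1} by one time step.

    S  : list of (v_i, rho_i) pairs for i = 1..N-1 at time level n
    S0 : boundary pair at i = 0 (assumed equal at levels n and n+1)
    SN : boundary pair at i = N (same assumption)
    """
    m = len(S)      # number of interior nodes
    n = 2 * m       # two unknowns per node
    # A acts on (v, rho) as A*(v, rho) = (r*rho, r*v).
    def apply_A(p):
        return (r * p[1], r * p[0])
    # Left-hand matrix: identity, +A coupling forward, -A coupling backward.
    L = [[0.0] * n for _ in range(n)]
    for i in range(m):
        L[2*i][2*i] = L[2*i+1][2*i+1] = 1.0
        if i + 1 < m:                      # +A times S_{i+1}
            L[2*i][2*(i+1)+1] = r
            L[2*i+1][2*(i+1)] = r
        if i - 1 >= 0:                     # -A times S_{i-1}
            L[2*i][2*(i-1)+1] = -r
            L[2*i+1][2*(i-1)] = -r
    # Right-hand side: the mirrored matrix (signs flipped) plus boundary terms.
    rhs = [0.0] * n
    for i in range(m):
        acc = list(S[i])
        if i + 1 < m:
            a = apply_A(S[i+1])
            acc[0] -= a[0]; acc[1] -= a[1]
        if i - 1 >= 0:
            a = apply_A(S[i-1])
            acc[0] += a[0]; acc[1] += a[1]
        rhs[2*i], rhs[2*i+1] = acc
    b0 = apply_A((2 * S0[0], 2 * S0[1]))   # A(S_0^n + S_0^{n+1})
    rhs[0] += b0[0]; rhs[1] += b0[1]
    bN = apply_A((2 * SN[0], 2 * SN[1]))   # A(S_N^n + S_N^{n+1})
    rhs[n-2] -= bN[0]; rhs[n-1] -= bN[1]
    x = solve_dense(L, rhs)
    return [(x[2*i], x[2*i+1]) for i in range(m)]
```

For $|r|<\tfrac12$ the left-hand matrix is strictly diagonally dominant, so the elimination never hits a zero pivot.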
Generating a uniform distribution in the volume of a box
(This is not an answer.) Not sure this is possible at all. In the simple case $x=y=z=1$, one looks for some i.i.d. random variables $U$, $V$ and $W$ such that $UVW$ is uniform on $(1-\epsilon,1+\epsilon)$. Thus, $\sqrt[3]{1-\epsilon}\leqslant U\leqslant\sqrt[3]{1+\epsilon}$ with full probability, and, in terms of generating functions, for every $s$, one should have $$ E(U^s)=\sqrt[3]{\frac{(1+\epsilon)^{s+1}-(1-\epsilon)^{s+1}}{2(s+1)\epsilon}}. $$ Considering $T=U/\sqrt[3]{1+\epsilon}$ and $t=(1-\epsilon)/(1+\epsilon)$, one gets $t$ in $(0,1)$, $\sqrt[3]{t}\leqslant T\leqslant1$ with full probability, and, for every $s$, $$ E(T^s)=\sqrt[3]{\frac{1-t^{s+1}}{(s+1)(1-t)}}. $$
Number theory question to establish a relation
Written before we knew the variables were integers: No. What you know is symmetric in the variables, while what you want is not. The only way you can evaluate it is if you can prove $ |p|=|q|=|r|$. But by inspection, $p=\sqrt 3, q=0,r=0$ and $p=0,q=\sqrt 3, r=0$ both satisfy the original statement. You can evaluate what you want in one case and not the other.
How do you choose the constant when using Markov's Inequality?
To see where $e$ comes from, let's assume instead we pick some arbitrary constant $k$ and use that instead of $e$. Then ultimately you will arrive at an expression like this one: $$P\left(U_i \geq f_i + \frac{kN}{w}\right) \leq \left(\frac{1}{k}\right)^d$$ Here, $U_i$ is the estimate of the frequency of $x_i$, $N$ is the total number of elements, $w$ is the width of any array, and $d$ is the number of arrays you use. Take any $\epsilon$ and $\delta$ in the range $(0, 1)$. Then if you set $$w = \frac{k}{\epsilon}, \qquad d = -\log_k \delta,$$ you get $$P(U_i \geq f_i + \epsilon N) \leq \delta.$$ The question is what choice of $k$ to make that minimizes space usage. The space usage is given by $$wd = \frac{k}{\epsilon}\left(-\log_k \delta\right).$$ You can use some elementary calculus to show that this function is minimized when you set $k = e$. Hope this helps!
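The calculus step is easy to check numerically: since $-\log_k \delta = \ln(1/\delta)/\ln k$, the space $wd$ is proportional to $k/\ln k$, so minimizing space amounts to minimizing $k/\ln k$. A quick sketch (the grid below is arbitrary):

```python
import math

# Space usage is proportional to k / ln(k); scan for its minimizer over k > 1.
ks = [1.01 + 0.001 * i for i in range(5000)]
best_k = min(ks, key=lambda k: k / math.log(k))
# best_k lands on the grid point nearest e = 2.71828...
```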
Show that $\frac{(2n-1)!}{(n)!(n-1)!}$ is odd or even according as to whether $n$ is or is not a power of $2$.
For any integer $k$ let $v(k)$ be the highest power of $2$ dividing $k$. Then $v\left((2n-1)!\right)=v\left(2\times 4 \times 6 ...\times (2n-2)\right)=n-1+v\left((n-1)!\right)$. Therefore $$v\left(\frac{(2n-1)!}{(n-1)!n!}\right)= (n-1) - v(n!).$$ This completes the proof since $v\left(n!\right)$ is $n-1$ when $n$ is a power of $2$ and is less than $n-1$ otherwise. N.B. You already know this result for $n$ a power of $2$. For other values of $n$ the result follows easily by induction. Suppose $n$ is a power of $2$. For $i<n$, $v(n+i)=v(i)$. So, for $k<n$, if $v\left(k!\right)<k-1$ then $$v\left((n+k)!\right)=v\left(n!\right)+v\left(k!\right)<n+k-1.$$
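As a sanity check of the statement (not a substitute for the proof), one can compare parities directly for small $n$:

```python
import math

def is_pow2(n):
    # n >= 1: a power of two has a single set bit
    return n & (n - 1) == 0

for n in range(1, 60):
    q = math.factorial(2*n - 1) // (math.factorial(n) * math.factorial(n - 1))
    # odd exactly when n is a power of 2
    assert (q % 2 == 1) == is_pow2(n)
```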
Connectedness of $\{(x,\sin(\frac{1}{x})); x \in ]0,1]\}$
(1) Yes, a function $X\to Y\times Z:f(x)=(g(x),h(x))$ (where $X$, $Y$, $Z$ are metric spaces) is continuous if and only if the coordinate functions are. You can show this directly from the $\varepsilon$-$\delta$ definition: Given an $\varepsilon$ by which we want to bound the variation in $f(x)$ about a point, find $\delta_1$ such that $g$ varies by at most $\varepsilon/2$ and $\delta_2$ such that $h$ varies by at most $\varepsilon/2$. Within a distance of $\min(\delta_1,\delta_2)$, the variation in each of the coordinates is at most $\varepsilon/2$, so the variation of all of $f$ cannot exceed $\varepsilon$. (2) You don't need to handle $a=0$ because $0$ is explicitly not in the domain of $f$. There's no requirement that $f(x_n)$ must converge unless the $x_n$ themselves converge to a point in the domain. This is not any different from the fact that $x\mapsto 1/x$ is continuous on $]0,\infty[$.
Two points on a curve that have a common tangent line
Generally, the equation $x^3-x^4=ax+b$ has four roots: $$x^3-x^4 = ax+b \Rightarrow (x-x_1)(x-x_2)(x-x_3)(x-x_4) = 0 $$ If $y=ax+b$ is a line tangent to the curve at 2 points, that means the roots coincide in pairs: $$x^3-x^4 = ax+b \Rightarrow (x-x_1)^2(x-x_2)^2 = 0 $$ Then we can have $$ x^4 - x^3 +0\cdot x^2 + ax + b = 0 \Rightarrow $$ $$ x^4 - 2(x_1+x_2)x^3 + (x_1^2+x_2^2+4x_1x_2)x^2 - 2x_1x_2(x_1+x_2)x + x_1^2x_2^2 = 0 $$ Comparing coefficients: $2x_1 + 2x_2 = 1$; $x_1^2 + 4x_1x_2 + x_2^2 = 0$; $-2x_1x_2(x_1+x_2) = a$; $x_1^2x_2^2 = b$. It is not too hard to find $a=\frac{1}{8}$, $b=\frac{1}{64}$.
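A quick numerical check of this answer: from the coefficient equations, $x_1+x_2=\tfrac12$ and $x_1x_2=-\tfrac18$, so $x_1,x_2$ are the roots of $t^2-\tfrac12 t-\tfrac18=0$, and both points should lie on $y=\tfrac{x}{8}+\tfrac1{64}$ with matching slope:

```python
import math

# x1, x2 are the roots of t^2 - (1/2) t - 1/8 = 0
s, p = 0.5, -0.125
disc = math.sqrt(s * s - 4 * p)
roots = [(s + disc) / 2, (s - disc) / 2]

a, b = 1 / 8, 1 / 64
for t in roots:
    f = t**3 - t**4            # the curve
    df = 3 * t**2 - 4 * t**3   # its slope
    assert abs(f - (a * t + b)) < 1e-12   # point lies on the line
    assert abs(df - a) < 1e-12            # slopes agree: genuine tangency
```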
Combinatorics and leaving a choice out
I must admit that I don't know either what they mean with "only one selection matters". I would rather say that two selections matter: J|St|E and J|St|A. There are $\binom53=10$ possible selections and this will be the denominator. There are $\binom21=2$ choices for the "-" in J|St|- and this will be the numerator. That gives: $$P(\text{J|St|E or J|St|A})=\frac2{10}=\frac15$$
Is there a higher chance of winning one contest if you enter many? How can we determine when it's most mathematically favorable then?
Let's consider a simple example and then you can extrapolate. Consider two independent lotteries $L_1$ and $L_2$ that both pay $1$ with probability $50\%$ and $0$ otherwise; we could write this $P(L_1=1)=P(L_1=0)=0.5$. A fair ticket to enter such a lottery should cost $0.5$. Let's assume you have $1\$$. You can buy two different strategies:

- Buy one ticket of lottery 1 and one of lottery 2, and get: $2$ with probability $0.5\times0.5 = 0.25$; $1$ with probability $0.5$; $0$ with probability $0.25$.
- Buy 2 tickets of lottery 1, and get: $2$ with probability $0.5$; $0$ with probability $0.5$.

In both cases you will get $1\$$ on average (the price you paid to enter the deal). You won't get any richer. The more independent lotteries you spread your investment over, the more it will smooth out your possible outcomes and their probabilities, but the average gain stays the same. In reality, some people would prefer (be willing to pay more for) the first strategy and some would prefer the second. To be honest, most people like their losses limited and their gains uncapped, potentially very high, because of emotional bias. On the other hand, a company that sells both strategies in large amounts (over many lotteries) should be indifferent between the two, but will adjust the price to make its profit, trading off attracting lottery buyers against the profit on the lottery payoff minus the price.
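A small enumeration of the two strategies above makes the point concrete: equal means, different spreads.

```python
# Strategy 1: one ticket in each of two independent fair lotteries.
s1 = {2: 0.25, 1: 0.5, 0: 0.25}
# Strategy 2: two tickets in the same lottery.
s2 = {2: 0.5, 0: 0.5}

def mean(d):
    return sum(x * p for x, p in d.items())

def var(d):
    m = mean(d)
    return sum(p * (x - m) ** 2 for x, p in d.items())

assert mean(s1) == mean(s2) == 1.0   # same average payoff: the ticket price
assert var(s1) < var(s2)             # diversifying only smooths the outcomes
```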
convexity of two linear spaces connected by a convex equality constraint
No. All equality constraints have to be linear -- this is almost your observation. As an example, take $-1 \le x \le 1$, $-1 \le y \le 1$ and $\exp(y) = x$. Then the feasible set is a non-convex part of the graph of $\exp$.
Primitive roots for primes (Burton's text book)
One standard definition of primitive root is that $g$ is a primitive root of the prime $p$ if $g$ has order $p-1$ modulo $p$. Thus $p-1$ is the smallest positive integer $j$ such that $g^j\equiv 1\pmod{p}$. This definition immediately implies that if $g$ is a primitive root of $p$, then $g^1,g^2,g^3,\dots,g^{p-1}$ are pairwise incongruent modulo $p$. This implies that as $k$ ranges from $1$ to $p-1$, the remainder when $g^k$ is divided by $p$ ranges, in some order, over the numbers from $1$ to $p-1$.
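For instance, $3$ is a primitive root of $7$, and the last statement is a two-line check:

```python
p, g = 7, 3
powers = [pow(g, k, p) for k in range(1, p)]
# g^1, ..., g^{p-1} hit every nonzero residue exactly once
assert sorted(powers) == list(range(1, p))
```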
First Course in Linear algebra books that start with basic algebra?
Gilbert Strang's excellent book "Introduction to Linear Algebra" and video lectures are all you need to teach yourself basic linear algebra from first principles. If you can't get the book, the video lectures and other material available at MIT OCW is sufficient, but I would recommend the book.
Shortest rotation for a vector to look into a point
Vector from camera to current target: $\vec v$. Vector from camera to new target: $\vec t$. The cross product $\vec v \times \vec t= \vec r$ gives the axis of rotation. The magnitude of this product, $\lvert \vec r \rvert = \lvert \vec v \rvert \cdot \lvert \vec t \rvert \cdot \sin \theta$, gives the angle $\theta$ of rotation from $\vec v$ to $\vec t$; combine it with the dot product $\vec v \cdot \vec t$ (e.g. via $\operatorname{atan2}$) to recover $\theta$ unambiguously, since $\sin\theta$ alone cannot distinguish $\theta$ from $180º-\theta$. This angle is always $\le 180º$. In 2D, to tell the clockwise or counter-clockwise rotation see this answer
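A sketch of the computation in plain Python (no libraries; the atan2 of the cross-product magnitude against the dot product gives the full $0$ to $180º$ range):

```python
import math

def cross(v, t):
    return (v[1]*t[2] - v[2]*t[1],
            v[2]*t[0] - v[0]*t[2],
            v[0]*t[1] - v[1]*t[0])

def dot(v, t):
    return sum(a*b for a, b in zip(v, t))

def rotation_to(v, t):
    """Axis (unnormalized) and angle in [0, pi] rotating v onto t."""
    r = cross(v, t)
    angle = math.atan2(math.sqrt(dot(r, r)), dot(v, t))
    return r, angle

axis, angle = rotation_to((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
assert axis == (0.0, 0.0, 1.0) and abs(angle - math.pi/2) < 1e-12
```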
About convergence in probability infinitely often.
Define the event: $$A_n:=\left\{ω\, :\, |X_n(ω)-X(ω)|>ε\right\}$$ Then, by definition, $$ω \in A_n \,\,\text{i.o.} \iff ω \in \limsup_{n\to\infty}A_n$$ Using this notation, your question can be written as (note the question mark) $$\lim_{n\to \infty} P(A_n)=0 \overset{?}\implies P(\limsup_{n\to\infty}A_n)=0$$ The answer is no; an example is already given in the comments. The first Borel-Cantelli lemma gives a sufficient (but not necessary) condition for this to hold. However, the following implication is true: $$\lim_{n\to \infty} P(A_n)>0 \implies P(\limsup_{n\to\infty}A_n)>0$$ or in other words, if the random variables $X_n$ do not converge in probability they cannot converge in the second mode (a necessary condition, but not a sufficient one).
If a function is constant on some domain and differentiable, is it constant on a larger domain?
No. It is possible to "patch" functions together in a differentiable way. For instance, let the domain be $[-1,1]$, $$ f(x)=\begin{cases}0,&\ x<0 \\ x^2,&\ x\geq0 \end{cases} $$ Then $f$ is differentiable (you can check that it is differentiable at $0$ by looking at the two side derivatives), constant in part of the domain, but non-constant in the whole. A variation of this example can achieve the same even if "infinitely differentiable" is required. Only when we get to "analytic", will "constant on an interval" imply "constant everywhere".
Convergent sequence that satisfy $y_{n+1}=3y_n^2-3y_n+1$
$l=\frac{1}{3}$ is surely not a limit of the sequence, as the map $f(x)=3x^2-x+1$ is continuous and $l$ is not one of its fixed points.
Some verification of localization of categories
Sorry to bring up this question, but someone told me how to do this, though he didn't really want to answer it himself. I'm posting it here in case it helps someone else. The problem comes from the definition of being equivalent. Gelfand-Manin doesn't really give it correctly, or at least not clearly. In diagram III.10, $s,t$ and $sr$ are in $U$ while $r$ might not be. The roof the book refers to should be the large roof. And with this definition, all proofs provided by the textbook make sense.
Comparing integral resolutions using Wolfram Alpha / Mathematica
If you are at the point of computing integrals, then you are probably already comfortable with computing derivatives. The way to check whether a potential antiderivative is correct is to take its derivative. If you take the derivative of $f(x)=\ln(\sqrt x)$ (either by first simplifying to $\frac{1}{2}\ln(x)$ or using the chain rule) you get $f'(x)=\frac{1}{2x}$. If you take the derivative of $g(x)=-\frac{2}{\sqrt x}=-2x^{-1/2}$, you get $g'(x)=x^{-3/2}=\frac{1}{\sqrt{x}^3}=\frac{1}{x\sqrt{x}}$.
Termination of The Ford-Fulkerson Algorithm
You can't: if the initial flow is irrational and all initial capacities are rational, the algorithm still doesn't have to terminate. Take the standard example, replace the irrational capacity $r$ with some larger rational capacity $q$, and assume the initial irrational flow goes through $s \to v_4 \to v_3 \to t$ with value $q - r$.
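For reference, the usual fix is to always pick a shortest augmenting path (BFS), i.e. Edmonds-Karp, which terminates in $O(VE^2)$ augmentations regardless of the capacities. A minimal sketch on a made-up network:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: Ford-Fulkerson with BFS-chosen augmenting paths."""
    n = len(cap)
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:           # no augmenting path left
            return flow
        # Find the bottleneck along the path, then push flow along it.
        bottleneck = float('inf')
        v = t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= bottleneck   # update residual capacities
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck

# toy network: 0 = source, 3 = sink
c = [[0, 3, 2, 0],
     [0, 0, 1, 2],
     [0, 0, 0, 3],
     [0, 0, 0, 0]]
assert max_flow(c, 0, 3) == 5
```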
If $R$ is the circumradius of $\triangle ABC$, and $\cos A=\frac1{2R}$, $\cos B=\frac1{R}$ and $\cos C= \frac3{2R}$, then is it unique and its area?
By law of sines, we have $$\sin A=\frac{a}{2R}\Rightarrow 1-\left(\frac{1}{2R}\right)^2=\frac{a^2}{4R^2}\Rightarrow a=\sqrt{4R^2-1}.$$ Also, we have $$b=\sqrt{4R^2-4},\ \ \ \ c=\sqrt{4R^2-9}.$$ Then, the area $S$ is $$S=\frac 12bc\sin A=\frac{\sqrt{(4R^2-1)(4R^2-4)(4R^2-9)}}{4R}.$$
Pole and residue of trigonometric function
$$\frac{\sin z^2}{z^3} = \frac{1}{z^3} \left (z^2 - \frac{z^6}{3!} + \frac{z^{10}}{5!} - \dots \right ) = \frac{1}{z} - \frac{z^3}{3!} + \frac{z^{7}}{5!} - \dots$$ Look at the coefficient $a_{-1}$ of $\frac{1}{z}$: it is $1$. Therefore $\text{Res}\left ( \frac{\sin z^2}{z^3}, 0\right) = 1$. This is a simple pole.
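One can confirm this numerically by evaluating $\frac{1}{2\pi i}\oint f\,dz$ on a small circle around $0$ (a sketch using a simple Riemann sum over the parametrized circle; the radius and point count are arbitrary):

```python
import cmath, math

def residue_at_zero(f, radius=0.5, n=4000):
    """Approximate (1 / 2 pi i) * contour integral of f over |z| = radius."""
    total = 0j
    for k in range(n):
        theta = 2 * math.pi * k / n
        z = radius * cmath.exp(1j * theta)
        dz = 1j * z * (2 * math.pi / n)   # z'(theta) * d(theta)
        total += f(z) * dz
    return total / (2j * math.pi)

res = residue_at_zero(lambda z: cmath.sin(z * z) / z**3)
assert abs(res - 1) < 1e-6
```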
Let $G$ be a group with order $p$ a prime number. Show $G$ is cyclic.
The definition of a cyclic group is a group in which all the elements are generated by one element. One can also show that if $G$ is cyclic of order $n$ then it is isomorphic to $\mathbb{Z}/n\mathbb{Z}$ (I'm sure you can find this fact in your book). Hence, you probably want to find an element whose order is the order of the group to finish your problem. Here is a hint: Let $x$ be an element of $G$ that is not the identity. What does Lagrange's theorem tell you? That the order of $x$ divides the order of $G$, which is a prime. Which are the divisors of a prime?
Where has this unusually imaginary number been sighted?
$\newcommand{\Dmd}{\diamond}\newcommand{\Reals}{\mathbf{R}}$Perhaps you know this already, but $(\Reals, +)$ is isomorphic to $\bigl((-1, 1), \Dmd\bigr)$ via $\tanh$, i.e., $$ \tanh(\alpha + \beta) = \frac{\tanh \alpha + \tanh \beta}{1 + \tanh \alpha \tanh \beta} = (\tanh \alpha) \Dmd (\tanh \beta); $$ indeed, this equation is essentially the addition law for velocities, expressed as multiples of $c$, in special relativity. Further, $$ a = \tanh \alpha\quad \text{if and only if}\quad e^{-2\alpha} = \frac{1 - a}{1 + a} \quad \text{if and only if}\quad \alpha = \frac{1}{2} \log \frac{1 + a}{1 - a}, $$ which appears related to your observation that $\Dmd$ is conjugate to multiplication. Added: Writing $$ a = \tanh \alpha = \frac{\sinh\alpha}{\cosh\alpha} \quad\text{and}\quad b = \tanh \beta = \frac{\sinh\beta}{\cosh\beta}, $$ identity (1) becomes \begin{align*} \frac{1}{a} \Dmd b &= \coth\alpha \Dmd \tanh\beta \\ &= \frac{\coth\alpha + \tanh\beta}{1 + \coth\alpha \tanh\beta} \\ &= \frac{\dfrac{\cosh\alpha}{\sinh\alpha} + \dfrac{\sinh\beta}{\cosh\beta}} {1 + \dfrac{\cosh\alpha}{\sinh\alpha} \dfrac{\sinh\beta}{\cosh\beta}} \\ &= \frac{\cosh\alpha \cosh\beta + \sinh\alpha \sinh\beta} {\sinh\alpha \cosh\beta + \cosh\alpha \sinh\beta} \\ &= \frac{\cosh (\alpha + \beta)}{\sinh (\alpha + \beta)} \\ &= \frac{1}{a \Dmd b}, \end{align*} and similarly (or by commutativity) for $a \Dmd (1/b)$. My initial thoughts were that $\Dmd$ has surely been noticed in the past (as Bill Dubuque's links seem to confirm), and perhaps the preceding computation placed the fact under the umbrella of "hyperbolic identities". :) I'm not a mathematical historian by any means, but in terms of citable references, (1) seems not too far distant from a basic identity such as the factorization of a difference of squares, in that (at risk of stating the obvious) $$ \frac{1}{a} \Dmd b = \frac{(1/a) + b}{1 + (b/a)} = \frac{1 + ab}{a + b} = \frac{1}{a \Dmd b}. 
$$ The fact that $\infty \Dmd b = 1/b$ for all (finite) $b$ may be interpreted as a consequence of $$ a \Dmd b = \frac{a + b}{1 + ab} = \frac{1 + (b/a)}{(1/a) + b} $$ together with an ordinary limit as $a \to \infty$. :) Here are some geometric thoughts (organized, I hope, but not very well-culled) that may be germane to the spirit of the question. Consider the maps $$ \phi(a) = \frac{1 + a}{1 - a},\qquad \rho(a) = \frac{1}{a},\qquad \mu(a) = -a, $$ on $\Reals \cup\{\infty\}$, i.e., the "Riemann circle". Geometrically, $\phi$ is a quarter turn counterclockwise, so $\phi \circ \phi$ is the half-turn of the Riemann circle, a.k.a. the map $\rho \circ \mu = \mu \circ \rho$ sending $a$ to $-1/a$. The maps $\phi$ and $\rho$ (or $\phi$ and $\mu$) generate a dihedral group of order eight, in which, for example, $$ \phi \circ \mu = \rho \circ \phi,\qquad \phi \circ \rho = \mu \circ \phi. $$ Loosely, $\phi$ exchanges reciprocation with negation. Consider also the maps $\psi:(0, \infty) \to \Reals$ defined by $\psi(a) = \tfrac{1}{2} \log a$; and $\tanh:\Reals \to (-1, 1)$. Note that $\tanh^{-1} = \psi \circ \phi$, and the maps $$ \bigl((0, \infty), \cdot\bigr) \stackrel{\psi}{\longrightarrow} (\Reals, +) \stackrel{\tanh}{\longrightarrow} \bigl((-1, 1), \Dmd\bigr) $$ are isomorphisms of Abelian groups. The backward composition $$ \phi = (\tanh \circ \psi)^{-1} = \psi^{-1} \circ \tanh^{-1} $$ maps $(-1, 1)$ bijectively to $(0, \infty)$, and extends to $[-1, 1]$ by continuity (in the Riemann circle). In fact, $\phi$ is an isomorphism (as a composition of isomorphisms), but the "morphism condition" may be pleasantly checked directly: \begin{align*} \phi(a \Dmd b) &= \frac{1 + \dfrac{a + b}{1 + ab}}{1 - \dfrac{a + b}{1 + ab}} \\ &= \frac{1 + ab + a + b}{1 + ab - a - b} \\ &= \frac{(1 + a)(1 + b)}{(1 - a)(1 - b)} \\ &= \phi(a) \cdot \phi(b). 
\end{align*} Since $\phi(1/a) = -\phi(a)$, the condition "$R \Dmd a = 1/a$ for all (finite) $a$" maps under $\phi$ to $$ \phi(R) \cdot \phi(a) = -\phi(a)\quad\text{for all (finite) $a$,} \quad\text{or}\quad \phi(R) = -1, $$ recovering $R = \phi^{-1}(-1) = \infty$. Again, the key fact lurking behind these observations appears to be the existence of three mutually-isomorphic groups embedded in the Riemann circle, and the fact that $\Dmd$ is "addition in disguise".
Subgroup of a topological group
It's not true that $K=\bigcup_{g\in G}gK$. In fact, this is equal to $G$. Assuming $K^C$ is the complement of $K$, it is true that $K^C=\bigcup_{g\in G-K}gK$. This is a simple algebraic fact: the cosets $gK$ for $g\notin K$ are all disjoint from $K$, and every element that is not in $K$ is in such a coset. This is true for any group, not just topological groups. To answer the edited question, it is indeed true that $K=\bigcup_{g\in K}gK$. In fact, $gK=K$ for all $g\in K$, so every term in the union is the same set.
Borel Probability Measure on a compact metric space
Let $x_n \in K_n$ for all $n$ where $K_n$ is the support of $\mu_n$. There is a subsequence $x_{n_i}$ converging to some point $x$. Let $f$ be a bounded continuous function on $X$. Consider $\int_{K_{n_i}} (f(y)-f(x_{n_i}))d\mu_{n_i}$. By uniform continuity of $f$ it follows that $|f(y)-f(x_{n_i})| <\epsilon$ for all $y \in K_{n_i}$ once $i$ is sufficiently large, and hence $\int_{K_{n_i}} (f(y)-f(x_{n_i}))d\mu_{n_i} \to 0$. Now $\int (f(y)-f(x))d\mu_{n_i}=\int_{K_{n_i}} (f(y)-f(x_{n_i}))d\mu_{n_i}+f(x_{n_i})-f(x) \to 0$. Hence $\int_{K_{n_i}} f(y)d\mu_{n_i} \to f(x)$. This proves that the limiting measure is $\delta_x$.
Why are all finite subgroups of the group $S^1$ of complex roots of unity cyclic?
Let $G$ be a (non-trivial) finite subgroup of $S^1$. Identifying $S^1$ with $\mathbb{R}/\mathbb{Z}$ (and choosing $[0,1)$ as the set of representatives), let $x$ be the minimal non-zero element of $G$. If $G\neq \langle x\rangle$ then there is some $y\in G$ with $y\not\in\langle x\rangle$. Hence there is some natural number $n$ such that $nx<y<(n+1)x$, or $0<y-nx<x$. This contradicts the minimality of $x$, hence $G=\langle x\rangle$.
Law of total expectation derivation
Yes and yes, although for clarity $|Y$ should read $|Y=y$.
Suppose that $f : X \to Y$ is a one-to-one function and $A, B\subset X$ with $f(A) = f(B)$. Then $A = B$.
If $a\in A$ then $f(a)\in f(A)=f(B)$, so $f(a)=f(b)$ for some $b\in B$. Now "$f$ is one-to-one" implies $a=b$, so $a\in B$. We proved $A\subset B$; the converse is similar.
Describe the long-term behavior of $\frac{dy}{dx}=-3y+b(t)+7$ if $b(t)$ decreases to $0$ as $t\rightarrow\infty$
You'll verify that the solution of the differential equation $$E \equiv \frac{dy}{dt}=-3y+b(t)+7$$ taking the value $y_0$ at $0$ is $$y(t) = \left(y_0 + \int_0^t [b(u)+7]e^{3u} \ du \right) e^{-3t}$$ and $$\begin{aligned} y(t) - 7/3 & = \left(y_0 + \int_0^t [b(u)+7]e^{3u} \ du \right) e^{-3t} - 7/3\\ &=\left(y_0 - 7/3 \right)e^{-3t} + \left(\int_0^t b(u)e^{3u} \ du \right) e^{-3t} \end{aligned}$$ Now you should be able to prove that the RHS of the equality above converges to $0$ as $\lim\limits_{t \to \infty} b(t) = 0$. For that, notice that if $\vert b(t) \vert < \epsilon$ for $t >T$ you have $$\left\vert \left(\int_T^t b(u)e^{3u} \ du \right) e^{-3t} \right\vert \le \epsilon \left(\int_0^t e^{3u} \ du \right) e^{-3t} \le \frac{\epsilon}{3}, $$ while the contribution of $\int_0^T b(u)e^{3u}\,du$ is killed by the factor $e^{-3t}$. Hence $\lim\limits_{t \to \infty} y(t) = 7/3$.
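A numeric check with one concrete choice $b(t)=e^{-t}$ (any $b$ decreasing to $0$ should behave the same), integrating with a simple forward Euler step:

```python
import math

def integrate(b, y0=0.0, t_end=20.0, dt=1e-3):
    """Forward Euler for y' = -3y + b(t) + 7."""
    y, t = y0, 0.0
    while t < t_end:
        y += dt * (-3 * y + b(t) + 7)
        t += dt
    return y

y_final = integrate(lambda t: math.exp(-t))
assert abs(y_final - 7 / 3) < 1e-2   # long-term value is 7/3
```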
Math book for engineer from the functions to the integrals
Gravitation by Misner, Thorne & Wheeler (http://www.amazon.com/Gravitation-Physics-Charles-W-Misner/dp/0716703440/). As a side benefit, you'll also learn enough about the physics and math of gravitation to earn a PhD. Actually, when I was in math grad school, a fellow (admittedly brilliant) student claimed this book was how he learned calculus. This would help with the intuition side. Or grab a syllabus (or one each for each course in a precalculus & calculus series) and use web resources such as MIT open courseware (http://ocw.mit.edu/courses/mathematics/18-01sc-single-variable-calculus-fall-2010/syllabus/) or wikipedia. Chances are if Calculus is your weak point, you need to review the important topics in precalculus such as rules of algebra, methods/tricks of factoring, and the standard functions such as logarithm, exponential function, trigonometric functions and graphing. Which books have you tried so far, and what have you disliked about them?
Why commutative law, associative law, distributive law ... are considered to be axioms in propositional logic?
The answer to your question is a bit complicated ... part of it is because we can think about what would make something an 'axiom' in different ways: First of all, yes, we can prove these laws using the truth-tables ... which really means: we can show that these laws hold on the basis of more fundamental definitions. Typically (but as Mauro says, not always), these more fundamental definitions state that: Every atomic claim is either true or false (but not both) (or: if you want to go into more abstract binary algebra: every variable takes on exactly one of two values) $\neg \varphi$ is true iff $\varphi$ is false $\varphi \land \psi$ is true iff $\varphi$ and $\psi$ are true. etc. etc. (in other words, these are simply the more formal definitions of what you do in a truth-table) So yes, from these (i.e. using truth-tables) we can prove all the laws you mention. So, in that sense, laws like commutation, association, etc. typically aren't really axioms, as we can infer them from more basic principles. On the other hand, sometimes we start out not with the kind of 'formal semantical' definitions as laid out above, but we simply start with a bunch of syntactically defined sentences, and say "these are my axioms, and here are some inference rules that allow you to infer further sentences from that". The Hilbert system is one example of that: in this system we have as one of the axioms $P \rightarrow (Q \rightarrow P)$. And yes, I could of course infer that that statement follows from the semantical definitions from above (e.g. I could show in a truth-table that that statement is always true), but in the context of such 'axiomatic proof systems', this statement is really seen as an axiom ... no deeper semantics is provided. Now, to make things even more confusing: There are various kinds of axiomatic systems. The Hilbert system actually does not use Commutation, Association, etc. as its axioms. 
But: you could define an axiomatic system (and there probably are some) where these laws really are its axioms!
Show that a polynomial is irreducible in $\mathbb{Q}$
In $\mathbb{F}_p[x]$, we have $\prod_{i=0}^{p-1} \left ( x-i \right ) = x^p - x$. Now it is known that if $p \nmid a$, then $x^p - x + a$ is irreducible in $\mathbb{F}_p[x]$. Proof: Let $\alpha \not\in \mathbb{F}_p$ be a root of $x^p - x + a$ over $\mathbb{F}_p$. Then all the elements $\alpha, \alpha + 1 ,\ldots, \alpha + p-1$ are roots of $x^p - x + a$ over $\mathbb{F}_p$. So $\mathbb{F}_p (\alpha)$ is the splitting field of $x^p -x + a$ over $\mathbb{F}_p$. Now it's easy to see that the degree of the minimal polynomial of $\alpha$ over $\mathbb{F}_p$ divides $p$. (*) Since $p$ is a prime, the degree of the minimal polynomial of $\alpha$ over $\mathbb{F}_p$ is $p$. (It can't be $1$ since $\alpha \not\in \mathbb{F}_p$.) This proves that the minimal polynomial is actually $x^p - x + a$. So it must be irreducible and we are done. An elementary proof of it for the case $p=5$ can be found in http://www.artofproblemsolving.com/Forum/viewtopic.php?p=2450995#p2450995. Edit: (*) The degree of the minimal polynomial $g_0(x)$ of $\alpha$ over $\mathbb{F}_p$ is equal to $[\mathbb{F_p}(\alpha) : \mathbb{F_p}] = n$ (say). Since $\mathbb{F_p}(\alpha) = \mathbb{F_p}(\alpha + t)$, the minimal polynomial $g_t(x)$ of $\alpha + t$ also has degree $n$. Note that $g_t(x) \mid x^p - x + a$, so the roots of $g_t(x)$ lie in $\left \{ \alpha, \ldots, \alpha + p-1 \right \}$. Also note that, from the uniqueness of the minimal polynomial, $g_r(x)$ and $g_s(x)$ have no common root for $r \ne s$. So the roots of the $g_t(x)$ partition $\left \{ \alpha, \ldots, \alpha + p-1 \right \}$ with $n$ elements in each class. So $n \mid p$.
Antiderivative of $e^x/(1+2e^x)$
These answers are the same. Namely, $$ \ln(2e^x + 1) = \ln(2(e^x + 0.5)) = \ln(2) + \ln(e^x + 0.5).$$ The added $\ln 2$ is absorbed by the $+ C$, and so we see that the two answers are the same up to an additive constant. There is no error.
class number of pure cubic fields and elliptic curves
You could try SAGE:

    sage: K.<x> = NumberField(x^3 - 6321363052)
    sage: K.class_number()
    sage: E = EllipticCurve(QQ,[0,-6321363052])
    sage: E.gens()

EDIT: One should expect the computations to take a while. These are not easy computations to carry out and patience is key.
Problem #11 in Royden-Fitzpatrick $4^{th}$ edition.
Okay, we know that for a simple function $\phi (x) = \sum_{i=1}^{n} a_i\chi_{ E_i} $ defined such that $\bigcup E_i \subset [\alpha+\gamma , \beta+\gamma]$, we have $$\int_{[\alpha+\gamma,\beta+\gamma]} \phi = \sum_{i=1}^{n} a_i m(E_i).$$ Now $$\phi(t+\gamma) = \left\{ \begin{array}{cc} a_1 &, t+\gamma \in E_1 \\ a_2 &, t+\gamma \in E_2 \\ \vdots & \end{array}\right.$$ Now we can define $E'_i = E_i - \gamma $; then $\bigcup E'_i \subset [\alpha , \beta] $ and $$\phi(t+\gamma) =\phi' (t) = \left\{ \begin{array}{cc} a_1 &, t \in E'_1 \\ a_2 &, t \in E'_2 \\ \vdots & \end{array}\right.$$ Thus finally, using the translation invariance of Lebesgue measure, $$\int_{[\alpha , \beta] }\phi(t+\gamma) =\int_{[\alpha , \beta ]} \phi' (t) = \sum a_i m(E'_i) = \sum a_i m(E_i ) = \int_{[\alpha + \gamma, \beta+\gamma]} \phi(x) dx. $$ Now we know that for a bounded function $g$ defined on $E= [\alpha+\gamma ,\beta+\gamma ]$ we define the upper integral of $g$ as $$\inf \left\{ \int_E \psi : \psi \text{ is simple and } \psi \geq g \right\}$$ and the lower integral as $$\sup \left\{ \int_E \phi : \phi \text{ is simple and } \phi \leq g \right\}, $$ and $g$ is integrable if both values are equal. We also know from the simple approximation lemma that for a bounded function $g$ on $[\alpha+\gamma , \beta+\gamma ]$ and any given $\epsilon$ there exist two simple functions $\phi \leq g \leq \psi $ such that $\psi - \phi < \epsilon $. Hence given $\epsilon > 0$ there exist simple functions $\phi , \psi $ such that $\phi \leq g \leq \psi $ and $\psi - \phi < \frac{\epsilon}{\beta-\alpha}$. Then we have $g \leq \psi $ on $[\alpha + \gamma , \beta + \gamma] $, so $g(t+\gamma) \leq \psi (t+ \gamma)=\psi'(t) $ on $[\alpha , \beta]$, and $$\left|\int_{[\alpha , \beta ] } g(t+\gamma) - \int_{[\alpha + \gamma, \beta + \gamma ]} g(x) \right| \leq \left| \int_{[\alpha , \beta ]} \psi' - \int_{[\alpha + \gamma , \beta + \gamma ]} \phi \right|\leq \int_{[\alpha+\gamma,\beta+\gamma]} | \psi - \phi | < \epsilon. $$
Metric tensor derivative identity
I found out that this comes from the matrix identity $$\partial X^{-1} = - X^{-1} (\partial X) X^{-1}.$$ A proof of this identity is in here. With this, we have $$X (\partial X^{-1}) X = - \partial X.$$ By substituting $g^{\mu \lambda}$ for $X$, we get the identity!
Uniqueness for the wave equation on an interval
It's not clear why you decided to perform these manipulations, but it seems you are considering the energy method. The energy for this equation is $$E(t)=\int_0^l (v_t^2+a^2v_x^2)\,dx$$ which is basically the sum of kinetic and potential energy. Differentiate with respect to time, apply the PDE, and integrate one of the terms by parts: you will get zero. This shows the energy is constant. Since it was zero initially, it stays equal to zero. This implies the function $v$ is constant, as desired.
Show that $(1+\frac 1n)^{n^2} \mathrm e^{-n}$ is not a zero sequence without logarithms
Write $z_n = \left(1 + \frac1n\right)^{n^2} e^{-n}$. We will first argue that $z_n \ge e^{-1/2}$, by showing that for $x \in [0,1]$, $e^{x - x^2/2} \le 1 + x$. To do this, let $$ f(x) := e^{x - x^2/2} - 1 - x.$$ Note that $$ f'(x) = (1-x) e^{x -x^2/2} - 1, \\ f''(x) = ((1-x)^2 -1)e^{x- x^2/2}.$$ The second derivative is non-positive for $x \in [0,1]$, so the derivative is nonincreasing on this domain. Since $f'(0) = 0,$ the derivative is non-positive on $[0,1]$ - i.e., $f$ is nonincreasing on $[0,1]$. Finally, since $f(0) = 0,$ this tells us that for $x \in [0,1]$, $f(x) \le 0 \iff e^{x- x^2/2} \le 1 + x$. Using this for $x = 1/n,$ we find that $ e^{1/n - 1/2n^2} \le 1 + 1/n$ for $n \ge 1,$ and thus $$(1 + 1/n)^{n^2} e^{-n} \ge (e^{1/n - 1/2n^2})^{n^2} e^{-n} = e^{-1/2}.$$ We will now get an upper bound on $z_n$. By truncating the series, we get that $e^{u} \ge 1 + u + u^2/2$ for $u \ge 0$. Now, let $x_n = \sqrt{1+2/n} - 1$. Note that $x_n + x_n^2/2 = 1/n$. So, we find that $(1+1/n) \le e^{x_n},$ and thus $$ (1 + 1/n)^{n^2} e^{-n} \le e^{n^2 x_n - n}. $$ Since $\exp(\cdot)$ is continuous, if we can argue that $ n^2 x_n - n \to -1/2,$ then we'd be done by the sandwich theorem. We can show this via a Taylor expansion: $\sqrt{1 + u} = 1 + u/2 - u^2/8 + O(u^3).$ Thus, $$ n^2 x_n - n = n^2 \left( 1 + \frac{(2/n)}{2} - \frac{(4/n^2)}{8} + O(n^{-3}) -1\right) - n = - \frac{1}{2} + O(n^{-1}),$$ and we're done.
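A quick numerical check of both the lower bound and the convergence is easy; working in logarithms avoids overflow for large $n$:

```python
import math

# z_n = (1 + 1/n)^{n^2} e^{-n}, computed via logs to avoid overflow:
# log z_n = n^2 log(1 + 1/n) - n, with log1p for accuracy at small 1/n.
def z(n):
    return math.exp(n * n * math.log1p(1.0 / n) - n)

for n in (10, 100, 1000):
    print(n, z(n))

# The proved lower bound: z_n >= e^{-1/2} for all n >= 1,
# and z_n approaches e^{-1/2} ≈ 0.6065 from above.
limit = math.exp(-0.5)
```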
Calculate number of zeros using the Argument Principle
$f$ is real and positive on $[0,+\infty)$, hence there is no change of argument on that part of the boundary of the quarter-disk. For large enough $R$, $f$ behaves essentially like $z \mapsto z^5$ on the circle $\lvert z\rvert = R$, so on the quarter-circle $Re^{it}$, $t \in \bigl[0,\frac{\pi}{2}\bigr]$, the argument changes by $\frac{5\pi}{2} + O(R^{-1})$. On the imaginary axis, we have $$f(it) = (5 + t^2) + i(t^5 - 2 t^3 + 2t),$$ so everything stays in the right half-plane, and the argument changes from a little less than $\frac{\pi}{2}$ at $iR$ (since $\arctan \frac{R^5 - 2R^3 + 2R}{R^2 + 5} = \frac{\pi}{2} - O(R^{-3})$) to $0 = \arg 5$ at $0$. Since the total change of argument along the boundary must be a multiple of $2\pi$, it follows that the total change of argument here is exactly $2\pi$, hence there is precisely one zero in the quadrant $\operatorname{Re} z > 0,\, \operatorname{Im} z > 0$.
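The answer does not restate $f$ itself, but the quoted boundary values $f(it) = (5+t^2) + i(t^5 - 2t^3 + 2t)$ are matched by the real-coefficient polynomial $f(z) = z^5 + 2z^3 - z^2 + 2z + 5$. Assuming that reconstruction is the intended $f$, the zero count can be confirmed numerically:

```python
import numpy as np

# f(z) = z^5 + 2z^3 - z^2 + 2z + 5, reconstructed (an assumption) from the
# boundary values f(it) = (5 + t^2) + i(t^5 - 2t^3 + 2t) quoted above.
coeffs = [1, 0, 2, -1, 2, 5]

# Sanity check: the reconstruction reproduces f(it).
t = 1.7
assert abs(np.polyval(coeffs, 1j * t)
           - ((5 + t**2) + 1j * (t**5 - 2 * t**3 + 2 * t))) < 1e-9

# Count zeros in the open first quadrant Re z > 0, Im z > 0.
roots = np.roots(coeffs)
first_quadrant = [z for z in roots if z.real > 1e-9 and z.imag > 1e-9]
print(len(first_quadrant))
```

By the argument above there should be exactly one such root (its conjugate sits in the fourth quadrant, and the remaining three roots lie in the left half-plane).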
Discrete random variable with probability generating function problem. Help!
Hints:

- $G_X(\theta)$ is a generating function, so expand it and look at the coefficients of powers of $\theta$ to find the probabilities of $X$ taking small values, and so of $Y$ taking the squares of these values.
- $G'_X( 1^- ) = E[X]$ and $G''_X( 1^- ) = E[X(X-1)]$.
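To illustrate the hints with a concrete PGF (the problem's actual $G_X$ is not shown here, so this uses the binomial PGF $G_X(\theta) = ((1+\theta)/2)^3$ as a stand-in):

```python
from fractions import Fraction

# Stand-in example: G_X(theta) = ((1 + theta)/2)^3, the PGF of
# X ~ Binomial(3, 1/2). Expanding it gives the coefficients p_k = P(X = k):
p = [Fraction(1, 8), Fraction(3, 8), Fraction(3, 8), Fraction(1, 8)]

# First hint: the coefficient of theta^k is P(X = k).
# Second hint: G'_X(1) = E[X] = sum_k k p_k,
#              G''_X(1) = E[X(X-1)] = sum_k k(k-1) p_k.
EX = sum(k * pk for k, pk in enumerate(p))
EXX1 = sum(k * (k - 1) * pk for k, pk in enumerate(p))
VarX = EXX1 + EX - EX**2

print(EX, EXX1, VarX)  # 3/2 3/2 3/4
```

The variance formula $\operatorname{Var}(X) = G''_X(1) + G'_X(1) - G'_X(1)^2$ is the usual way to combine the two derivative facts.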
Mathematical induction and Stirling numbers
Suppose we seek to evaluate $$\sum_{q=1}^m {m\choose q}\times q!\times {n\brace q}.$$ We can include $q=0$ here because the Stirling number is zero then. Now the species of set partitions is given by $$\mathfrak{P}(\mathcal{U}\mathfrak{P}_{\ge 1}(\mathcal{Z}))$$ which gives the generating function $$\exp(u(\exp(z)-1))$$ and hence $${n\brace q} = n! [z^n] \frac{(\exp(z)-1)^q}{q!}.$$ Substitute this into the sum to get $$n! [z^n] \sum_{q=0}^m {m\choose q} \times q!\times \frac{(\exp(z)-1)^q}{q!} \\= n! [z^n] \sum_{q=0}^m {m\choose q} (\exp(z)-1)^q = n! [z^n] \exp(mz) = m^n.$$
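The identity $\sum_{q} \binom{m}{q}\, q!\, {n\brace q} = m^n$ is easy to verify computationally, using the standard recurrence for Stirling numbers of the second kind:

```python
from math import comb, factorial
from functools import lru_cache

# Stirling numbers of the second kind via the recurrence
# S(n, q) = q*S(n-1, q) + S(n-1, q-1), with S(0, 0) = 1.
@lru_cache(maxsize=None)
def stirling2(n, q):
    if n == q:
        return 1
    if n == 0 or q == 0:
        return 0
    return q * stirling2(n - 1, q) + stirling2(n - 1, q - 1)

# Check sum_q C(m, q) * q! * S(n, q) = m^n for small m, n.
for m in range(1, 7):
    for n in range(1, 7):
        total = sum(comb(m, q) * factorial(q) * stirling2(n, q)
                    for q in range(0, m + 1))
        assert total == m**n
print("identity verified for m, n up to 6")
```

Combinatorially this is just classifying functions $[n] \to [m]$ by the size $q$ of their image: choose the image ($\binom{m}{q}$ ways), then surject onto it ($q!\,{n\brace q}$ ways).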
Can we say that $x^4$ is concave upwards for all $x\in\mathbb{R}$
Given the function $f(x) = x^4$, we have $f'(x) = 4x^3$. The derivative is monotonically non-decreasing on all of $\mathbb{R}$, so by the standard characterization of convex (concave-up) functions, $f$ is concave up on all of $\mathbb{R}$.
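Convexity can also be spot-checked numerically via the midpoint inequality $f\bigl(\frac{x+y}{2}\bigr) \le \frac{f(x)+f(y)}{2}$, which every convex function satisfies:

```python
import random

# Spot-check convexity of f(x) = x^4 on random pairs using the
# midpoint inequality f((x + y)/2) <= (f(x) + f(y))/2.
f = lambda x: x**4

random.seed(0)
for _ in range(10_000):
    x = random.uniform(-10, 10)
    y = random.uniform(-10, 10)
    assert f((x + y) / 2) <= (f(x) + f(y)) / 2 + 1e-7  # tolerance for rounding
print("midpoint convexity holds on all sampled pairs")
```

This is only a sanity check on samples, of course; the monotone-derivative argument above is what proves convexity everywhere.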