Harmonic number identity
$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack} \newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,} \newcommand{\dd}{{\rm d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,{\rm e}^{#1}\,} \newcommand{\fermi}{\,{\rm f}} \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{{\rm i}} \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\pars}[1]{\left(\, #1 \,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\pp}{{\cal P}} \newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,} \newcommand{\sech}{\,{\rm sech}} \newcommand{\sgn}{\,{\rm sgn}} \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}} \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$ $\ds{\sum_{j\ =\ 1}^{n - k}{\pars{-1}^{j + 1} \over j}\,{n \choose j + k} ={n \choose k}\pars{H_{n} - H_{k}}:\ {\large ?}}$. With an integral representation for $\ds{\mu \choose k}$ the evaluation is straightforward. With $\ds{0\ <\ a\ <\ 1}$: \begin{align}&\color{#66f}{\large% \sum_{j\ =\ 1}^{n - k}{\pars{-1}^{j + 1} \over j}\,{n \choose j + k}} =-\sum_{j\ =\ 1}^{\infty}{\pars{-1}^{j} \over j}\,{n \choose n - j - k} \\[5mm]&=-\sum_{j\ =\ 1}^{\infty}{\pars{-1}^{j} \over j} \oint_{\verts{z}\ =\ a}{\pars{1 + z}^{n} \over z^{n - j - k + 1}}\,{\dd z \over 2\pi\ic} =-\oint_{\verts{z}\ =\ a}{\pars{1 + z}^{n} \over z^{n - k + 1}} \sum_{j\ =\ 1}^{\infty}{\pars{-z}^{j} \over j}\,{\dd z \over 2\pi\ic} \\[5mm]&=\oint_{\verts{z}\ =\ a} {\pars{1 + z}^{n}\ln\pars{1 + z} \over z^{n - k + 1}}\,{\dd z \over 2\pi\ic} =\lim_{\mu\ \to\ 0}\partiald{}{\mu}\oint_{\verts{z}\ =\ a} {\pars{1 + z}^{\mu + n} \over z^{n - k + 1}}\,{\dd z \over 2\pi\ic} \\[5mm]&=\lim_{\mu\ \to\ 0}\partiald{}{\mu}{\mu + n \choose n - k} ={1 \over \pars{n - k}!}\lim_{\mu\ \to\ 0} \partiald{}{\mu}\bracks{\Gamma\pars{\mu + n + 1} \over \Gamma\pars{\mu + k + 1}} \\[5mm]&={1 \over \pars{n - k}!}\braces{% -\,{n! \over k!}\bracks{\Psi\pars{k + 1} - \Psi\pars{n + 1}}} \\[5mm]&={n! \over k!\,\pars{n - k}!}\braces{% \bracks{\Psi\pars{n + 1} + \gamma} - \bracks{\Psi\pars{k + 1} + \gamma}} =\color{#66f}{\large{n \choose k}\pars{H_{n} - H_{k}}} \end{align}
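For anyone who wants a quick sanity check of the identity, here is a short Python verification in exact rational arithmetic (an illustrative addition, not part of the derivation above):

```python
from fractions import Fraction
from math import comb

def H(m):
    # m-th harmonic number as an exact fraction (H_0 = 0)
    return sum(Fraction(1, i) for i in range(1, m + 1))

for n in range(1, 12):
    for k in range(0, n):
        lhs = sum(Fraction((-1) ** (j + 1), j) * comb(n, j + k)
                  for j in range(1, n - k + 1))
        rhs = comb(n, k) * (H(n) - H(k))
        assert lhs == rhs, (n, k)
print("identity verified for all 1 <= n <= 11, 0 <= k < n")
```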
Closed form solution for logarithmic inequality
The logic of replacing the square root with an absolute value is the formula $$ \sqrt {x^2} = |x|$$ which is correct. The problem is that there is a mistake in the solution because $ |\sqrt{x-1} -1|$ is substituted for $ \sqrt{x-\sqrt{x-1}}$ which is not correct. Note that $$|\sqrt{x-1} -1| =\sqrt {(\sqrt{x-1} -1)^2}= \sqrt{x-2\sqrt{x-1}}$$ Similarly $$|\sqrt{x-1} +1| =\sqrt {(\sqrt{x-1} +1)^2}= \sqrt{x+2\sqrt{x-1}}$$
Logic behind combinations with repetition?
Stars and Bars says that the number of ways to put $5$ indistinguishable balls into $2$ distinguishable boxes is $\binom{5+2-1}{2-1}$, which is $6$.
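A brute-force enumeration confirms the count (a quick illustrative check):

```python
from math import comb

# All ways to split 5 indistinguishable balls between 2 distinguishable boxes:
ways = [(a, 5 - a) for a in range(6)]  # box 1 gets a balls, box 2 the rest
print(len(ways), comb(5 + 2 - 1, 2 - 1))  # 6 6
```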
How to set up Zeno's paradox as a summation
Per OP's request: First see my comment. I derived the expression in my comment simply by noting that for $|x| < 1, ~\sum_{t=0}^{\infty} x^t = \frac{1}{1 - x}.$ Since I was looking for a value of $x$ so that $\frac{1}{1-x} = 2$, I set $x = \frac{1}{2},$ and then noted that $\left(\frac{1}{2}\right)^t = 2^{(-t)}.$
What's the volume of $x^2+xy+y^2+u^2+uv+v^2=1$
This sort of thing is easiest to do by using as little calculus as possible. First, we rewrite the left-hand side to be a sum of squares: $$ 1 = (x-y/2)^2 +\frac{3}{4} y^2 + (u-v/2)^2 + \frac{3}{4} v^2 $$ The volume is $$ \int_{ (x-y/2)^2 +\frac{3}{4} y^2 + (u-v/2)^2 + \frac{3}{4} v^2 \leq 1 } 1 \, dx \, dy \, du \, dv . $$ To change this into a volume we know, we let $X = x-y/2$, $Y = (\sqrt{3}/2) y$, $U = u-v/2$, $V = (\sqrt{3}/2) v$. Then the Jacobian is $$ \det{\frac{\partial(X,Y,U,V)}{\partial(x,y,u,v)}} = \det{\begin{pmatrix} 1 & -1/2 & 0 & 0 \\ 0 & \sqrt{3}/2 & 0 & 0 \\ 0 & 0 & 1 & -1/2 \\ 0 & 0 & 0 & \sqrt{3}/2 \end{pmatrix}} = 3/4 , $$ and the integral becomes $$ \int_{X^2+Y^2 + U^2 + V^2 \leq 1 } \frac{4}{3} \, dX \, dY \, dU \, dV . $$ All we have to do now is find the volume of the unit sphere in $4$ dimensions. One can use a generalisation of polar coordinates, namely $$ \begin{align} X &= r\sin{\theta} \sin{\phi} \sin{\psi} \\ Y &= r\sin{\theta} \sin{\phi} \cos{\psi} \\ U &= r\sin{\theta} \cos{\phi} \\ V &= r\cos{\theta} , \end{align} $$ with $0<r<1$, $0<\theta<\pi$, $0<\phi<\pi$ and $0<\psi<2\pi$. A tedious computation reveals that the Jacobian is $r^3 \sin{\phi} \sin^2{\theta}$, and so the volume of the unit sphere is $$ \left( \int_0^1 r^3 \, dr \right) \left( \int_0^{\pi} \sin{\phi} d\phi \right) \left( \int_0^{\pi} \sin^2{\theta} d\theta \right) \left( \int_0^{2\pi} d\psi \right) . $$ All of these integrals are straightforward, and we find this volume is $\pi^2/2$, and hence the required volume is indeed $2\pi^2/3$. There are easier ways to find the volume of a sphere, but they tend to use the Gamma-function.
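As a sanity check of the final value $2\pi^2/3\approx 6.58$, here is a quick Monte Carlo estimate in Python (illustrative only):

```python
import random, math

# Estimate the volume of x^2+xy+y^2+u^2+uv+v^2 <= 1 by rejection sampling.
# Since x^2+xy+y^2 >= (x^2+y^2)/2, the region lies inside the box [-2, 2]^4.
random.seed(0)
N, hits = 200_000, 0
for _ in range(N):
    x, y, u, v = (random.uniform(-2, 2) for _ in range(4))
    if x*x + x*y + y*y + u*u + u*v + v*v <= 1:
        hits += 1
print(hits / N * 4**4, 2 * math.pi**2 / 3)  # both approximately 6.58
```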
Deriving coefficient q in reduced quadratic equation using Vieta's formulas
Because by Vieta's formulas $$x_1+x_2=-\frac 11=-1.$$ Note that in general, if $$ax^2+bx+c=0\ \ \ (a\not=0)$$ has roots $\alpha,\beta$, then the following hold: $$\alpha+\beta=-\frac ba,\ \ \ \alpha\beta=\frac ca.$$
Solving a Diophantine Equation of the form $N(N-1) = 2X(X-1)$ for $N, X > 0$
You can rewrite this equation as $$(2N-1)^2=2(2X-1)^2-1.$$ The Diophantine equation $u^2-2v^2=-1$ is well known.[*] All the solutions to this equation have $u,v$ odd and are of the form $u+v\sqrt{2}=(1+\sqrt{2})^{2k+1}$. For example, $k=7$ gives: $$(1+\sqrt{2})^{15} = 275807+195025 \sqrt{2}$$ So $N=137904$ and $X=97513$. To get $N>10^{12}$, you need $k\geq 16$. I found that value by trial and error. The smallest is: $N=1070379110497$ and $X=756872327473$. [*] This is called the Negative Pell Equation for $2$.
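For completeness, a few lines of Python reproduce these solutions from the Pell recurrence (multiplying $u+v\sqrt2$ by $(1+\sqrt2)^2=3+2\sqrt2$ at each step); the printed values match the ones quoted above:

```python
# (1 + sqrt(2))^(2k+1) = u + v*sqrt(2) satisfies u^2 - 2v^2 = -1;
# then N = (u+1)/2 and X = (v+1)/2 solve N(N-1) = 2X(X-1).
u, v = 1, 1  # k = 0
for k in range(1, 17):
    u, v = 3*u + 4*v, 2*u + 3*v  # multiply by (1 + sqrt(2))^2 = 3 + 2*sqrt(2)
    N, X = (u + 1) // 2, (v + 1) // 2
    assert N * (N - 1) == 2 * X * (X - 1)
    if k in (7, 16):
        print(k, N, X)
# k=7  -> N=137904,        X=97513
# k=16 -> N=1070379110497, X=756872327473
```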
How to solve conditional probability for 3 events when one event is not independent
$$P(X \text{ and } Y \mid C) = \dfrac{P(X \text{ and } Y \text{ and } C)}{P(C)} $$ so long as $P(C)>0$. This is what I would call the traditional conditioning rule. If $C$ were independent of $X$ and $Y$, then $P(X \text{ and } Y \mid C) = P(X \text{ and } Y)$, but then the conditional probability would not be particularly interesting. If $C = X \cup Y$ then since $X \cap Y \subset X \cup Y$ you have $P(X \text{ and } Y \mid C) = \dfrac{P(X \text{ and } Y )}{P( X \text{ or } Y)} $
Function with Taylor series of order $k,$ which is not twice differentiable
There is no such thing. If $f$ has derivatives up to order $n\geq0$ at $0$ then the $n^{\rm th}$ order Taylor polynomial of $f$ at $0$ is given by $$j^n_0f\,(x):=\sum_{k=0}^n{f^{(k)}(0)\over k!}\>x^k\ ,$$ period. One then can deliberate how well this polynomial approximates $f$ in a neighborhood of $x=0$. If $f''(0)$ does not exist then there is no second order Taylor polynomial of $f$ at $0$.
Solving differential equation $m\frac{\partial v}{\partial t}=-b\left( \frac{\partial x}{\partial t} \right)^{2}$
Since $v=\frac{dx}{dt}$ we can re-write the equation as $$ m\frac{d^{2}x}{dt^{2}} = -b\left(\frac{dx}{dt}\right)^{2} $$ as you already showed. Equivalently, $$ mv\frac{dv}{dx} = -bv^{2} $$ which results in $$ \int \frac{1}{v}dv = \ln v = -\frac{b}{m}x + \lambda $$ therefore the solution for $v$ is $$ v = v_{0}\mathrm{e}^{-\frac{bx}{m}} $$ Now you can get $x(t)$ by integrating $$ \frac{dx}{dt} = v_{0}\mathrm{e}^{-\frac{bx}{m}} $$ where $v_{0} = \mathrm{e}^{\lambda}$ is determined by initial conditions. $$ \int \mathrm{e}^{\frac{bx}{m}}dx = \frac{m}{b}\mathrm{e}^{\frac{bx}{m}} = v_{0}t + \lambda_{1} $$ where $\lambda_{1}$ is another integration constant, determined by initial conditions. This yields $$ x(t) = \frac{m}{b}\ln\left(\frac{b}{m}v_{0}t + \lambda_{1}\right) $$
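A quick symbolic check with sympy confirms that this closed form satisfies the original equation (illustrative; the constant $c$ below plays the role of $\lambda_1$):

```python
import sympy as sp

# Substitute x(t) = (m/b)*ln((b/m)*v0*t + c) into m x'' + b (x')^2 and simplify.
t, m, b, v0, c = sp.symbols('t m b v0 c', positive=True)
x = (m / b) * sp.log((b / m) * v0 * t + c)
print(sp.simplify(m * sp.diff(x, t, 2) + b * sp.diff(x, t)**2))  # 0
```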
Picking 4 balls with replacement.
You are choosing with replacement, so you should just think in terms of drawing with $\frac 16$ chance of getting white, $\frac 13$ black, $\frac 12$ red. The combinations assume you are drawing without replacement. You have $\frac 56$ chance not to get white on each draw, so the chance you don't get a white one is ???
$D=\{(a,b,c,d,e)\in\mathbb{R}^5 \mid ax^4+bx^3+cx^2+dx+e=0\}$ Prove that ${(1,2,-4,3,-2)}\in \operatorname{int}(D)$
So I came up with the solution eventually, hope it is right. It goes like this: First, note that for $\overline p_0=(1,2,-4,3,-2)\in\mathbb{R}^5$, $x=1$ solves the polynomial $p(x)=x^4+2x^3-4x^2+3x-2$, thus $\overline p_0\in D$. Now let's define a function $F:\mathbb{R}^5\times\mathbb{R}\rightarrow \mathbb{R}$, $F(a,b,c,d,e,x)=ax^4+bx^3+cx^2+dx+e$. It is easy to see that $F$ is $C^1(\mathbb{R}^5\times \mathbb{R})$ and $F(\overline p_0,1)=F((1,2,-4,3,-2),1)=0$. Moreover $$D(F)=(\nabla F(a,b,c,d,e,x))^t=(x^4,x^3,x^2,x,1,4ax^3+3bx^2+2cx+d)^t$$ with $\operatorname{rank}(D(F))=1$ for all $(a,b,c,d,e,x)\in \mathbb{R}^6$, including our $(\overline p_0,1)$; in particular $$\frac{\partial F}{\partial x}(\overline p_0,1)=4+6-8+3=5\neq 0,$$ which is exactly what the implicit function theorem needs in order to solve for $x$. Now let us apply the implicit function theorem: it gives us a neighborhood $V \times W \subset \mathbb{R}^5 \times\mathbb{R}$ of $(\overline p_0,1)$ and a function $g\in C^1(V,\mathbb{R})$ with $g(V)\subset W$ such that: $g(\overline p_0)=1$, and $\forall (\overline p,x)\in V\times W:\ F(\overline p,x)=0 \Leftrightarrow g(\overline p)=x \Leftrightarrow F(\overline p,g(\overline p))=0$. The second claim implies that we have found a neighborhood $V$ of $\overline p_0$ contained in $D$, so $\overline p_0\in\operatorname{int}(D)$.
Evaluating Dirichlet series
It's very easy to give analytic continuations for $f(s) = \displaystyle \sum_{n \geq 1}\dfrac{a(n)}{n^s}$ when $a(n)$ is a periodic function (for example, $i^n$ or $\zeta^n$ for $\zeta$ a root of unity). Suppose the coefficients are periodic with period $Q$. Then write $$\begin{align} f(s) &= \sum_{0 \leq c < Q}a(c)\sum_{n \geq 1} \frac{1}{(c + nQ)^s} \\ &= \sum_{0 \leq c < Q}a(c)\sum_{n \geq 1}\frac{1}{\left(\frac cQ + n\right)^sQ^s}\\ &= \sum_{0 \leq c < Q}\frac{a(c)}{Q^s}\zeta\left(s, \frac cQ\right), \end{align}$$ where $\zeta(s, \frac cQ)$ is the Hurwitz zeta function. And so these Dirichlet series are finite sums of Hurwitz zeta functions, which have known analytic continuations to the whole plane (and are sums of Dirichlet L-Series, and vice versa). I had originally stated that your interest was in terms of digamma functions, but that's because I was misremembering the special values of Hurwitz zeta functions. It happens to be that the constant term of the Hurwitz zeta function at the pole at $s = 1$ is given by a digamma, and when you have periodic coefficients whose sum over the period is $0$ (like with roots of unity), one has that $$ f(1) = \sum_{n \geq 1} \frac{a(n)}{n} = -\frac{1}{Q} \sum_{1 \leq c \leq Q} a(c) \psi\left(\frac cQ\right),$$ but this isn't quite what you're looking for. However, if you accept Hurwitz zeta functions as being "closed form", then perhaps this is all you need.
Understanding marginalization in Bishop's Pattern Recognition and Machine Learning
Definition of conditional probability: $$p(b|a) = \frac{p(a,b)}{p(a)}$$ Just use the first and last expressions and rewrite; you will see it becomes the same as the definition above. The Wikipedia article just uses a set-theoretic expression for the same definition.
For what values of $a$ does the equation $\frac{1}{1+\frac{1}{x}}=a$ have no solution for $x$.
So you can write $$x(1-a)=a$$ If $a=1$ we have $$x\times 0=1,$$ which is impossible. For $a\neq 1$ we get $$x=\frac{a}{1-a}$$
Understanding fundamental principles of counting
The distinction is not between "independently" and "in succession". The real distinction is whether you are doing both, or you are doing just one. The number of ways to do one of the jobs is $m+n$ (either do the first job, which can be done in $m$ ways; or do the second job, which can be done in $n$ ways; total number of ways, $m+n$). The number of ways of doing both is $mn$, because you have $m$ ways of doing the first job, and $n$ ways of doing the second job, and you have to do both. Say you have $5$ pants and $3$ shirts. If you are giving one piece of clothing away, then you have $5+3$ ways of deciding which piece to give away. But if you are deciding what to wear, you need to pick a pair of pants, and a shirt. That gives you $5\times 3$ possible combinations. You can make the choices simultaneously (they don't have to be "in succession"). The "independence" clause is just that the choice of shirt/pants should not restrict your choice of pants/shirt (so you don't have a plaid shirt and a pair of striped pants that cannot be worn together...). What you choose for job one does not affect what you can choose for job two. So you have to think about what you are doing. I'm not sure if there are "keywords" that you should be looking at. See also this previous answer
How do I go about solving this calculus problem? Has to do with areas and sums.
Most likely $R_1$ is a right Riemann sum with one subdivision, $M_1$ is a midpoint Riemann sum with one subdivision, etc. See here: http://www2.seminolestate.edu/lvosbury/CalculusI_Folder/RiemannSumDemo.htm As an example, here is how to do $R_3$: Split the interval into 3 subintervals: $(0,1)$, $(1,2)$ and $(2,3)$. Then, compute $f(x)$ at the right endpoint of each subinterval: $f(1)=2, f(2)=5, f(3)=10$. Multiply each by the length of the subinterval: $f(1)\times 1=2, f(2)\times 1=5, f(3)\times 1=10$. Finally, add them up: $2+5+10=17$.
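The quoted values $f(1)=2$, $f(2)=5$, $f(3)=10$ are consistent with $f(x)=x^2+1$, so assuming that (the original problem statement isn't shown here), the computation can be scripted:

```python
def riemann(f, a, b, n, rule="right"):
    # Riemann sum of f on [a, b] with n equal subintervals.
    h = (b - a) / n
    if rule == "right":
        points = [a + i * h for i in range(1, n + 1)]
    elif rule == "mid":
        points = [a + (i + 0.5) * h for i in range(n)]
    else:  # left endpoints
        points = [a + i * h for i in range(n)]
    return h * sum(f(x) for x in points)

f = lambda x: x**2 + 1  # assumed from the sample values above
print(riemann(f, 0, 3, 3, "right"))  # 17.0, matching R_3 above
```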
Proof for sphere eversion
It's a question of proof acceptance: Let yourself be impressed by the famous video 'Outside In' by Bill Thurston and collaborators ... Or you may wish to invest more time and look into Scott Carter's book. Added in edit: A less informal & more technical reference is Bednorz & Bednorz, as proposed in Akiva Weinberger's comment to the OP.
Why are the areas non-positive?
What you can do to compute the area between the curve and the $x$ axis (y = 0) is split the integral. At $\pi/2$, the graph of $\cos x$ intersects the $x$ axis (the line $y = 0$). For values $0 \lt x \lt \pi/2$, $\cos x$ is above the $x$ axis. For values $\pi/2 \lt x \lt \pi$, $\cos x$ is below the line $y = 0$. To compute the area bound by the line $y = 0$ and $\cos x$, we split the integral into two integrals, subtracting the lower curve from the upper curve. $$\begin{align} \int_0^{\pi/2} (\cos x - 0)\,dx + \int_{\pi/2}^\pi (0 - \cos x)\,dx & = \int_0^{\pi/2} \cos x \,dx - \int_{\pi/2}^\pi \cos x \,dx \\ \\ &= 2 \int_0^{\pi/2} \cos x\,dx\end{align}$$
Problem solving the standard deviation for a stochastic variable
You have two issues: $\text{Var}(X)=4$, not $8$, so $\text{Var}(X_1+X_2+X_3+X_4+X_5)=20$, not $40$; and $\text{Var}(W)=\dfrac{\text{Var}(5W)}{5^2}$, not $\dfrac{\text{Var}(5W)}{5}$.
The equation $ pq = px + qy $ has more than one complete integral.
$$pq=xp+yq\quad ;\quad p=\frac{\partial z(x,y)}{\partial x}\quad ;\quad q=\frac{\partial z(x,y)}{\partial y}$$ If we look only for particular solutions there is no need for the characteristic method. Simple inspection suggests trying a function of the form $z=f(xy)$, for example. $$p=yf'\quad;\quad q=xf'\quad;\quad xy(f')^2=x(yf')+y(xf')=2xyf'\quad;\quad f'=2$$ $$\boxed{z(x,y)=2xy+c}$$ Together with the known complete integral $az=\frac12(y+ax)^2+b$, this is sufficient to show that there is more than one complete integral. Note that the solution $az=\frac12(y+ax)^2+b$ can be obtained directly by searching for particular solutions of the form $z=f(\alpha y+\beta x)$. With the Charpit-Lagrange method the system of characteristic ODEs is : $$\frac{dx}{x-q}=\frac{dy}{y-p}=\frac{dp}{-p}=\frac{dq}{-q}=\frac{dz}{(x-q)p+(y-p)q}$$ Note that the last fraction is of no interest since this is the combination of the two first fractions and equivalent to the obvious $dz=p\,dx+q\,dy$. No need to repeat here the well known method which for example is explained in full details in this video : https://www.youtube.com/watch?v=U51hN02LYrw The solution $z=2xy+c$ is straightforward in observing that the system of ODEs is satisfied with $p=2y$ and $q=2x$: $$\frac{dx}{x-(2x)}=\frac{dy}{y-(2y)}=\frac{d(2y)}{-(2y)}=\frac{d(2x)}{-(2x)}$$ $\frac{\partial z}{\partial x}=2y\implies z=2xy+g(y)\quad$ and $\quad\frac{\partial z}{\partial y}=2x\implies z=2xy+h(x)$ $g(y)=h(x)\implies g(y)=h(x)=c \quad\implies z=2xy+c$
$f(x) = \tan^{-1}(x)$ || Is there any easy method to show that the sequence [$f^n(0)$] is unbounded?
Direct method [no Taylor series here]. First we know that $$ f'(x) = \frac 1 {1+x^2} \implies (1 + x^2) f'(x) = 1. $$ Now take the $n$-th derivative: by the Leibniz rule, $$ (1 + x^2)f^{(n+1)}(x) + 2nx f^{(n)}(x) + n(n-1) f^{(n-1)}(x) = 0. $$ Plugging $x = 0$ into the equation above, we have $$ f^{(n+1)} (0) + n(n-1) f^{(n-1)} (0) = 0. $$ Since $f^{(0)} (0)= 0$ and $f^{(1)} (0) = 1$, it follows that $f^{(2n)} (0) = 0$ and $$ f^{(2n+1)}(0) = - 2n (2n-1) f^{(2n-1)}(0) = 2n(2n-1)(2n-2)(2n-3)f^{(2n-3)}(0) = \cdots = (2n)! (-1)^n. $$ This method is not as quick as the one using Taylor series, but it is feasible.
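This is easy to double-check with sympy for small $n$ (an illustrative check):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.atan(x)
for n in range(0, 5):
    value = sp.diff(f, x, 2 * n + 1).subs(x, 0)
    assert value == (-1) ** n * sp.factorial(2 * n), n
    assert sp.diff(f, x, 2 * n).subs(x, 0) == 0  # even derivatives vanish
print("f^(2n+1)(0) = (-1)^n (2n)! verified for n = 0..4")
```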
If we have a real line and $X$ is any subset of it, let $Y$ be a set such that $X\subseteq Y \subseteq \bar{X}$
As you said, we first show that $\bar{X}\subset\bar{Y}$. Let $z\in\bar{X}$. Let $\epsilon>0$. Since $z\in\bar{X}$, there is an $x\in X$ such that $|x-z|<\epsilon$. Note that $X\subset Y$, and hence, $x\in Y$ too. Thus, for any $\epsilon>0$, there is a $y\in Y$ such that $|z-y|<\epsilon$. I.e., $z\in\bar{Y}$. Therefore, $\bar{X}\subset\bar{Y}$. Using a symmetrical argument to the above, we have that $\bar{Y}\subset\bar{\bar{X}}=\bar{X}$. This follows since $Y\subset\bar{X}$. Hence, $\bar{Y}=\bar{X}$.
finding the coordinates of a point of intersection: 3d sphere and plane
Okay, so you first plug in $z=4$ (the $(z-4)^2$ term goes to $0$): $(x-1)^2+(y-2)^2=25$ You should recognize this as a circle centered at $(1,2)$ with radius $5$. Based on this, you can find different points. For example, you can have $(1,7,4)$ as a solution. If you got down to $x+y=8$, you went a little farther than necessary. The trick is to notice the circle. However, if you notice, I have $1+7=8$
countable union of closed compact balls in locally compact metric space is open
Take a point $x$ in the set $\bigcup_{n=1}^\infty A_n$. It follows that there exists some $n \geq 1$ such that $x \in A_n$. By definition, $A_{n+1}$ contains the closed ball of radius $r(x)>0$ and center $x$, and so it also contains the open ball of radius $r(x)$ and center $x$. This open ball is contained in the set $\bigcup_{n=1}^\infty A_n$ and it contains $x$. Since $x$ is an arbitrary element of the set $\bigcup_{n=1}^\infty A_n$, it follows that this set is a union of open balls. So it is open.
Elegant proof that $\left|(1-x)(2-x)(3-x)\cdots(n-x)\right| < n!$ for all $0<x<n+1$?
Suppose $$ \mid (1-x) (2-x) \cdots (n-x) \mid < n! $$ for all $0 < x < n+1$. If $0< x < n+1$, then $$ \mid (1-x) (2-x) \cdots (n-x) (n+1-x) \mid < n! \mid n+1-x \mid < (n+1)! $$ If $x=n+1$, the left side is $0$, so the bound holds trivially. If $n+1<x<n+2$, then let $y=x-1$, so $n<y<n+1$: $$ \mid (1-x) (2-x) \cdots (n-x) (n+1-x) \mid = \mid (-y) (1-y) (2-y) \cdots (n-y) \mid \\ = \mid y \mid \mid (1-y) (2-y) \cdots (n-y) \mid\\ < \mid y \mid n!\\ < (n+1)! $$ That was the induction step; let's go back to the base case. $$ 0 < x < 0+1\\ 0 < 1-x < 1\\ \mid (1-x) \mid = (1-x) < 1!=1 $$
Is it valid to write $\sin(x+iy)=\sin (x)\cos(iy)+\cos(x)\sin(iy)$
Hint. Continuing what you did, and using the comment from Pedro Tamaroff, $$\sin(x+iy)=\sin x\cosh y+i\cos x\sinh y\ .$$ But now read your question carefully: you are not asked for the real part of $\sin z$, but for the values of $z$ such that $\sin z$ is real. So, what can you say about $x$ and $y$ if the above expression is a real number?
Operations on formal power series in two variables.
Hint: Using the coefficient-of operator $[y^n]$ to denote the coefficient of $y^n$ of a series, we can extract the diagonal from $p(x,y)$ via \begin{align*} G(t)=[y^0]p\left(\frac{t}{y},y\right)=\sum_{n} a_{n,n}t^n \end{align*} This is discussed in R. P. Stanley's Enumerative Combinatorics, Section 6.3 (Diagonals). Related information can be found in this paper. Interesting information about diagonalisation is also given in this MO post.
When is a multivector in Cl(n) invertible?
If all else fails, you always have this tedious but straightforward method at hand. Let $A$ be any finite-dimensional unital algebra over a field $F$, say $\dim A=n$. Then, $A$ acts on itself by multiplication from the left, providing an algebra homomorphism $\varphi:A \rightarrow \operatorname{End}(A)$ to the algebra of $F$-linear endomorphisms of $A$. Explicitly, $\varphi(a)x = ax$. The kernel of $\varphi$ consists of elements $a\in A$ such that $\forall x\in A : ax=0$. Substitute $x=1$ to get $a=0$. Thus, $\varphi$ is injective, and $A$ is isomorphic to its image $\varphi(A) \subset \operatorname{End}(A)$. So, $a\in A$ is invertible iff $\varphi(a)$ is. Finally, $\varphi(a)$ is a linear operator on $A$, thus an $n\times n$-matrix in some basis of $A$, and invertibility of $\varphi(a)$ can be decided by computing the determinant $\det\varphi(a)$. Consider the case of $A = \operatorname{Cl}^{1,0}(\mathbb R)$ the Clifford algebra of a one-dimensional Euclidean space. It has a basis $\{1, e\}$ with $e^2 = 1$. An element $a + be \in A$ corresponds (in this basis) to a matrix $$\begin{pmatrix}a & b \\ b & a\end{pmatrix}$$ with determinant $a^2-b^2$. Indeed, whenever $a^2-b^2\neq 0$, we have $$(a+be)^{-1} = \frac{a-be}{a^2-b^2}$$ If $a^2-b^2 = 0$, then $(a+be)(a-be) = a^2 - b^2 = 0$, so $a+be$ is a zero divisor and cannot be invertible. As another example, take $A = \operatorname{Cl}^{0,1}(\mathbb R)$ the Clifford algebra of a one-dimensional space with negative-definite form. It is known that $A \cong \mathbb C$, the complex numbers, so every nonzero element should be invertible. Take the basis $\{1, e\}$ with $e^2=-1$. Now $a+be$ corresponds to $$\begin{pmatrix}a & -b \\ b & a\end{pmatrix}$$ with determinant $a^2+b^2$, which is zero iff $a=0$ and $b=0$.
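For what it's worth, the bookkeeping above is easy to mechanize; here is a small sympy sketch of the left-multiplication matrix for the two one-dimensional examples:

```python
import sympy as sp

a, b = sp.symbols('a b')

def left_mult_matrix(e_squared):
    # Left multiplication by (a + b e) in the basis {1, e}:
    # (a + b e) * 1 = a*1 + b*e,  (a + b e) * e = (b*e^2)*1 + a*e
    return sp.Matrix([[a, b * e_squared],
                      [b, a]])

print(left_mult_matrix(1).det())   # a**2 - b**2  (Cl^{1,0}, e^2 = +1)
print(left_mult_matrix(-1).det())  # a**2 + b**2  (Cl^{0,1}, e^2 = -1)
```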
Why is $(\sqrt{x^2})$ equal to $|x|$
$$\sqrt{x^2}=\begin{cases}x & \text{if} \: x\geq0 \\ -x & \text{if} \: x<0 \end{cases} = |x|$$ Is it clear?
Find necessary and sufficent conditions for A to be bijective function
Hints: $1).$ If $A(x)=0$, then taking inner products with each $e_k$ gives $\lambda_k\langle x,e_k\rangle=0.$ $2).$ Consider the map $e_k\mapsto \lambda_ke_k.$
Minimal Right Ideals in $\Bbb{C}[G]$
Since $R$ is semisimple every right ideal is of the form $eR$ and every left ideal is of the form $Rf$ for idempotents $e,f$. The minimal ones will be the ones generated by idempotents that cannot be written as a sum of two nonzero idempotents. Added a few more notes: On the off-chance it helps your computations, the minimal left ideals are exactly the direct complements of maximal left ideals. If you go so far as to decompose $R$ into a product of matrix rings over $\mathbb{C}$, then we can determine the minimal ideals completely in terms of that decomposition. The subset $L_j$ of matrices which are zero off of column $j$ is a left ideal, and in fact a minimal left ideal. If memory serves, every minimal left ideal is of the form $L_j u$, for some $j$ and for some unit $u$ of $R$.
Is there a simple formula for this lcm?
This is the exponential of the second Chebyshev function. It can be written as $$ \operatorname{lcm}(1,2,\ldots,n) = \prod_{p\le n} p^{\lfloor \log_p n \rfloor} $$ and so $$ \operatorname{lcm}(1,2,\ldots,n) = e^{\psi(n)} $$ where $\psi$ is the second Chebyshev function: $$ \psi(x) = \sum_{p^k\le x}\log p = \sum_{p\le x}\lfloor\log_p x\rfloor\log p. $$ There is no explicit formula for $\psi$, but there are asymptotic results. For instance, $$ \psi(x) \sim x, $$ which is equivalent to the prime number theorem. More detailed asymptotics are related to the Riemann Hypothesis.
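A short Python check of the product formula (illustrative):

```python
from math import gcd

def lcm_up_to(n):
    L = 1
    for k in range(2, n + 1):
        L = L * k // gcd(L, k)
    return L

def primes_up_to(n):
    sieve, ps = [True] * (n + 1), []
    for p in range(2, n + 1):
        if sieve[p]:
            ps.append(p)
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return ps

for n in range(2, 60):
    prod = 1
    for p in primes_up_to(n):
        e = 1
        while p ** (e + 1) <= n:  # e = floor(log_p n), computed in integers
            e += 1
        prod *= p ** e
    assert prod == lcm_up_to(n), n
print("lcm(1..n) = prod p^floor(log_p n) verified for 2 <= n < 60")
```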
Show non-convexity of a function with vector input
Observe that $d$ is $2\pi$-periodic in every axial direction. The only way that such a periodic function can be convex is if it is constant.
If $\alpha\cdot \tilde{\alpha}\sim\beta\cdot \tilde{\beta}$ with $\tilde{\alpha}\sim\tilde{\beta}$, then $\alpha\sim\beta$? ($\sim=$homotopic)
Note that $$(\alpha\cdot\tilde{\alpha})\cdot\tilde{\alpha}^{-1}\sim(\beta\cdot\tilde{\beta})\cdot\tilde{\alpha}^{-1}\sim (\beta\cdot\tilde{\beta})\cdot\tilde{\beta}^{-1},$$ where $^{-1}$ means "reverse the path" (the first homotopy comes from $\alpha\cdot\tilde{\alpha}\sim\beta\cdot\tilde{\beta}$ and the second comes from $\tilde{\alpha}\sim\tilde{\beta}$ by just reversing the homotopy as well). Now concatenation is associative up to homotopy, the concatenation of a path with its inverse is homotopic to a constant path, and any path is homotopic to its concatenation with a constant path. Thus we get $$(\alpha\cdot\tilde{\alpha})\cdot\tilde{\alpha}^{-1}\sim \alpha\cdot(\tilde{\alpha}\cdot\tilde{\alpha}^{-1})\sim \alpha\cdot c\sim \alpha$$ where $c$ is the constant path at $\alpha(1)$. Similarly, $$(\beta\cdot\tilde{\beta})\cdot\tilde{\beta}^{-1}\sim\beta.$$ Putting this all together, we get $$\alpha\sim\beta.$$ Actually writing down an explicit homotopy is messy but not hard, and just involves using the proofs of all the facts used above (e.g., a specific homotopy that shows that concatenation is associative up to homotopy).
Does $\sum _{n=1}^{\infty } \frac13 (\frac{n+1}{n})^{n^2}$ converge?
The root test says that if $\lim a_n^{1/n} <1$ then the series is convergent. If it is $>1$ then it is divergent. In this case, the limit $e$ is greater than $1$, so the series diverges.
Prove that $A^{-1} = \frac{1}{2}(A+I)$
Hint: the characteristic polynomial of $A$ is $$ p_A(x)=(x-1)(x+2)=x^2+x-2$$ and $p_A(A)=0$ by the Cayley-Hamilton theorem.
Is it possible to swap only two elements on a 3x3 grid by swapping rows and columns?
Four numbers that are in the "corners" of a rectangle (like 1379 or 1278) will remain in the corners of a rectangle no matter how many or which swaps happen. So you cannot swap just two elements while keeping the rest fixed.
Why is spatial tensor called spatial?
When you represent your C*-algebras faithfully on Hilbert spaces, say $A \subseteq B(H), B \subseteq B(K)$, the spatial/minimal tensor product is just the norm closure of the algebraic tensor product $A \odot B \subseteq B(H \otimes K)$. In my mind, this is why it's called spatial - it is defined as some algebra of operators on the underlying Hilbert space $H \otimes K$. It just so happens that the resulting C*-algebra is independent of the choices of faithful representations, and so you have "the" spatial tensor product or "the" minimal tensor product.
How to calculate the outward flux of a vector field through a cone?
By the divergence theorem, the flux equals $$ \phi=\iiint_E \nabla \cdot \vec{F}\; dV = 3 V(E), $$ and since $E$ is a cone with base $A$ and height $h$: $$ \phi = 3 V(E)=3 \frac{A h}{3}=Ah. $$
Determine bijective conformal self maps of $\Bbb C \setminus \{0,1\}$
Let $f$ be an automorphism of $\Bbb{C}\setminus\{0,1\}.$ Since $f$ is injective, by Picard's theorem none of the points $0, 1$ can be an essential singularity. Therefore at these points we need to have either removable singularities or poles of order one. Moreover these are the automorphisms of the Riemann sphere that fix the set $\{0, 1, \infty\}$. Thus we have the following cases for possible automorphisms $f$: Möbius transformations that fix $0, 1$ and $\infty$: $$z$$ Möbius transformations that fix $0$ and interchange $1$ and $\infty$: $$\dfrac{z}{z-1}$$ Möbius transformations that fix $1$ and interchange $0$ and $\infty$: $$\dfrac{1}{z}$$ Möbius transformations that fix $\infty$ and interchange $0$ and $1$: $$1-z$$ Möbius transformations that send $0\to1\to\infty\to 0$: $$\dfrac{1}{1-z}$$ Möbius transformations that send $0\to\infty\to 1\to 0$: $$\dfrac{z-1}{z}$$ Also any composition of these Möbius transformations is a candidate for such an automorphism, but here we have already computed all such possible compositions. The set of these Möbius transformations forms a subgroup of $\operatorname{PGL}(2, \Bbb{C})$ called the anharmonic group, and it is isomorphic to the dihedral group of order $6$.
Divide unlimited balls into m different boxes
Consider the set $S=\{0,1,2,\dots, K\}$. Now, there are $$m!\binom{K+1}{m}$$ ways to pick $m$ different numbers from this set in order (the $m!$ appears because the order picked matters). If we correspond the first number we picked with box $1$, the second with box $2$, and so on, then there are $m!\binom{K+1}{m}$ ways to put a different number of balls in each box. Of course, we want the number of ways to put a different number of balls in each box such that the number of balls is increasing. Now, suppose we have picked a sequence of integers from $S$ $$T=a_1,a_2,\cdots a_m$$ There is one and only one way to rearrange this sequence such that it is increasing. For example, if $$T=3,5,2$$ then $T$ can be rearranged into $$2,3,5$$ Since the number of ways to rearrange a sequence is $m!$, the total number of ways to pick a sequence of $m$ elements from $S$ such that the sequence is increasing is $$\frac{m!\binom{K+1}{m}}{m!}=\binom{K+1}{m}$$
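A small enumeration confirms the count (an illustrative check, with hypothetical values $K=6$, $m=3$):

```python
from itertools import combinations, permutations
from math import comb

K, m = 6, 3
S = range(K + 1)  # possible ball counts 0..K
increasing = [t for t in permutations(S, m)
              if all(t[i] < t[i + 1] for i in range(m - 1))]
assert len(increasing) == comb(K + 1, m) == len(list(combinations(S, m)))
print(len(increasing))  # 35
```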
Combinatorics problem - selecting teams of two from two different groups
Your reasoning is sound, and your answer is correct. You could also have argued as follows, starting with your first observation. There are, as you say, $\binom85\binom{10}5$ ways to choose $5$ boys and $5$ girls. Now line up the boys. There are $5$ ways to pick a teammate for the first boy in line, $4$ ways to pick a teammate for the second boy in line, and so on, so there are $5!$ ways to match up the boys and the girls. That gives you a total of $\binom85\binom{10}55!=1\,693\,440$ different ways to form $5$ teams, each consisting of a boy and a girl.
Other formulation of the inverse Galois problem
You should write quotient: if $K$ is a finite Galois extension of $\mathbb{Q}$, then $\mathrm{Gal}(K/\mathbb{Q})$ is a quotient of $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ by the Fundamental Theorem of Galois theory.
Finding $U(\mathbb{Z}[i])$.
The equation $ac-bd=1$ means $a$ and $b$ are coprime. We want $\frac{-bc}{a} \in \mathbb{Z}$. Then, if $a \neq \pm 1$, we get $a\mid c$, because we know $a$ does not divide $b$.
Reduce Logical Expression
Supposing that $a \land b \lor c$ means $(a \land b) \lor c$, the DNF of your expression can be found this way. $(\lnot (p \lor q) \land r) \lor \lnot (p \leftrightarrow (q \lor r) )$ $(\lnot p \land \lnot q \land r) \lor (p \oplus (q \lor r) )$ $(\lnot p \land \lnot q \land r) \lor (p \land \lnot (q \lor r) ) \lor (\lnot p \land (q \lor r) )$ $(\lnot p \land \lnot q \land r) \lor (p \land \lnot q \land \lnot r) \lor (\lnot p \land q) \lor (\lnot p \land r)$ And since $a \lor (a \land b) \leftrightarrow a$ (absorption, applied here with $a = \lnot p \land r$): $(p \land \lnot q \land \lnot r) \lor (\lnot p \land q) \lor (\lnot p \land r)$
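A truth-table check over all eight assignments confirms the reduction (a quick Python verification):

```python
from itertools import product

def original(p, q, r):
    return (not (p or q) and r) or not (p == (q or r))

def reduced(p, q, r):
    return (p and not q and not r) or (not p and q) or (not p and r)

assert all(original(p, q, r) == reduced(p, q, r)
           for p, q, r in product([False, True], repeat=3))
print("all 8 assignments agree")
```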
How to find system of equations from solution space
Actually the 3 vectors are linearly independent. For example, it is not hard to check that $$\left|\begin{array}{cccc}1 & -2 & 4 & 3 \\ 1 & -1 & 6 & 4 \\ 3 & -8 & 8 & 3 \\ 1 & 0 & 0 & 0 \end{array}\right|=-32\ne0$$ So we expect there will be just one equation. Suppose it is that the vector $(w,x,y,z)$ must satisfy $aw+bx+cy+dz=0$. Then we know that $a,b,c,d$ must satisfy: (1) $a-2b+4c+3d=0$, (2) $a-b+6c+4d=0$, (3) $3a-8b+8c+3d=0$ Taking (2)-(1), we get $b+2c+d=0$, and taking 2(2)-(1), we get $a+8c+5d=0$. Substituting in (3) gives $d=0$. So the only possible equation is essentially $8w+2x-y=0$. It is easy to check that the three given points all satisfy it and hence all linear combinations of those points will also satisfy it.
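Quick check that the equation works (illustrative):

```python
# All three given vectors (w, x, y, z) should satisfy 8w + 2x - y = 0.
vectors = [(1, -2, 4, 3), (1, -1, 6, 4), (3, -8, 8, 3)]
for w, x, y, z in vectors:
    assert 8 * w + 2 * x - y == 0
print("8w + 2x - y = 0 holds for all three spanning vectors")
```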
How do i prove that $C_c(X)$ is a vector space?
And to say why the inclusion holds, just note that if $f(x) = 0$ and $g(x) = 0$ then $(f+g)(x) = 0$.
Angle between GPS coordinates
Your general procedure would be something like this: Start with a global rectangular earth-centered coordinate system. For each of point A and B, find the "base point" on the GPS/WGS84 reference ellipsoid that is directly below/above the point. The reference ellipsoid has an equatorial radius of $R_e = 6\,378\,137.0\;\rm m$ and a polar radius of $R_p = 6\,356\,752.314\;\rm m$. What the latitude $\phi$ and longitude $\lambda$ of the geographical coordinates specify is the direction of the local zenith, that is, the surface normal of the reference ellipsoid at the base point. There's some algebra and trigonometry to be done here -- unless I've made a mistake somewhere, the coordinates of the base point work out to $$ \bigl( R_e \cos(\lambda) \cos(\psi), R_e \sin(\lambda) \cos(\psi) , R_p \sin(\psi) \bigr) $$ where $\psi = \arctan\left( \frac{R_p}{R_e} \tan \phi \right)$. But better check for yourself that this makes sense. Find the actual points $A$ and $B$ in rectangular coordinates by going an appropriate distance straight upwards from the base point. Note that "upwards" means along the local normal defined by $\phi$ (not $\psi$) and $\lambda$ -- it is not straight away from the center of the earth. Subtract the resulting coordinates to find the direction of sight from A to B. Use a rotation matrix created from $(\phi_A, \lambda_A)$ to convert the difference vector $B-A$ into a local coordinate system at $A$ (with basis vectors pointing due east, north, up). Find your "pitch" and "yaw" by taking appropriate arctangents of the components of the line-of-sight in local coordinates. Beware that the mean sea level can be several tens of meters above or below the reference ellipsoid surface in different parts of the world. Some GPS receivers can be configured to use a mathematical model of the mean sea level (a "geoid", which may either be a one-size-fits-all global model or a more precisely defined model for use in a single region/country) to display altitudes relative to that one instead of the reference ellipsoid. If that is the case for your input data, you will need to undo the correction in order to produce ellipsoid-relative vertical coordinates for use in the calculations sketched above.
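Here is a sketch of the base-point computation in Python, using the formula above (which, as noted, you should verify before relying on it):

```python
from math import radians, sin, cos, tan, atan

R_E = 6_378_137.0      # WGS84 equatorial radius, m
R_P = 6_356_752.314    # WGS84 polar radius, m

def base_point(lat_deg, lon_deg):
    phi, lam = radians(lat_deg), radians(lon_deg)
    psi = atan((R_P / R_E) * tan(phi))  # parametric latitude of the base point
    return (R_E * cos(lam) * cos(psi),
            R_E * sin(lam) * cos(psi),
            R_P * sin(psi))

print(base_point(0.0, 0.0))   # (6378137.0, 0.0, 0.0) -- on the equator
print(base_point(45.0, 0.0))  # roughly (4517591, 0, 4487348)
```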
Existence of transverse homotopy between knots in a 3-manifold
Suppose $h : S^1 \times I \to \Sigma$ is a smooth homotopy between the knots $K_0 = h(-, 0)$ and $K_1 = h(-, 1)$. Consider the movie $f : S^1 \times I \to \Sigma \times I$ of the homotopy, defined by $f(x, t) = (h(x, t), t)$. I claim that $f$ is homotopic, relative to its boundary, to a map $g$ which is an immersion with transverse double points, i.e., $g$ is an embedding away from finitely many points $x_1, y_1, \cdots, x_n, y_n \in S^1 \times I$ such that $g(x_k) = g(y_k) = p_k$ for $1 \leq k \leq n$ and $\text{im} \,dg_{x_k} \pitchfork \text{im} \, dg_{y_k}$. This follows from a self-transversality theorem that I have stated and given a sketch of proof below. As a matter of fact $g$ can be chosen to be $C^1$-close to $f$ as well. $f$ was completely horizontal to $\Sigma$, i.e., if we denote $f_t = f|S^1 \times \{t\}$ to be the restriction on the $t$-slice, $f_t' \subset (d\pi)^*T\Sigma$ where $\pi : \Sigma \times I \to \Sigma$ is the projection. Therefore $g_t (=g|S^1 \times \{t\})$ must be $\varepsilon$-horizontal to $\Sigma$, and in particular, $g_t' \notin \ker d\pi$. This would mean $\pi \circ g_t$ is an immersion for all $t \in I$. Note that this is almost enough for what you want: Take $H = \pi_\Sigma \circ g$ to be your new homotopy. This is a homotopy of $K_0$ to $K_1$ through immersions, and the only difficulty is that $H_t$ might stay on a singular knot for an interval's worth of time $t \in J \subset I$, whereas you want it to stay there for only finitely many points in $I$. Locally near a crossing, $(H_t)_{t \in I}$ would look like a pair of undercrossing-overcrossing which gradually grows closer, stays in the position of "$\mathsf{X}$" for the interval $t \in J$ and then grows apart and becomes a pair of overcrossing-undercrossing. We modify $H$ near those points so that it only stays like an "$\mathsf{X}$" for a point $t = t_0 \in J$. Then further modify $H$ so that double points are introduced one at a time, i.e., $H_t$ has at most one double point. Suppose $M$ is a compact $n$-manifold and $N$ is a compact $2n$-manifold. Let $C^\infty(M, N)$ be the space of smooth maps and $\text{Imm}(M, N)$ be the subspace of such maps which are immersions. Call a map $f \in \text{Imm}(M, N)$ an immersion with clean double points if whenever $x, y \in M$ and $f(x) = f(y) = p$, there exist charts $U, V$ around $x, y$ respectively such that $f|_U, f|_V$ are embeddings, $f(U)$ intersects $f(V)$ transversely, and $f(U) \cap f(V) = \{p\}$. Denote the subspace of such maps as $\text{Imm}_{\pitchfork}(M, N)$. Theorem (Self-transversality of maps): $\text{Imm}_\pitchfork(M^n, N^{2n})$ is dense in $C^\infty(M^n, N^{2n})$. Let $J^1(M, N)$ be the space of $1$-jets of maps $M \to N$ and $\mathscr{S} \subset J^1(M, N)$ be the subset of $1$-jets of non-immersions. The space of $1$-jets is an affine bundle $M_{2n \times n}(\Bbb R) \to J^1(M, N) \to M \times N$ with fiber keeping track of the formal derivative component. Let $\Sigma_{< n} \subset M_{2n \times n}(\Bbb R)$ be the stratified subset of matrices of rank strictly less than $n$. As $\mathscr{S}$ fibers over $M \times N$ with fiber $\Sigma_{< n}$, it is also a stratified subset of $J^1(M, N)$. Given any map $f \in C^\infty(M, N)$, consider its 1-jet prolongation $j^1 f : M \to J^1(M, N)$. By the Thom transversality theorem, we can homotope $f$ by a $C^1$-small homotopy to $g \in C^\infty(M, N)$ such that $j^1 g \pitchfork \mathscr{S}$.
But observe that $\dim J^1(M, N) = n(2n+3)$, so $\text{codim}\, j^1g(M) = \dim J^1(M, N) - \dim M = 2n(n+1)$, whereas $\text{codim}\, \mathscr{S} = \text{codim}_{M_{2n \times n}(\Bbb R)} \Sigma_{< n} = n+1$, so $\text{codim}\, j^1 g(M) + \text{codim}\, \mathscr{S} > \dim J^1(M, N)$, forcing $j^1 g$ to be disjoint from $\mathscr{S}$. This implies $j^1 g$ has rank $n$ everywhere, i.e., $g$ is an immersion. This implies $\text{Imm}(M, N)$ is dense in $C^\infty(M, N)$. Now for any $f \in \text{Imm}(M, N)$ consider the map $F : M \times M \to N \times N$, $F(x, y) = (f(x), f(y))$. By a $C^1$-small homotopy we can modify $F$ to $G$ so that $G$ is transverse to the diagonal $\Delta_N \subset N \times N$. Let $G|\Delta_M$ be the restriction to the diagonal of $M \times M$. As $G$ is $C^1$-close to $F$, the image of $G|\Delta_M$ fits inside an $\epsilon$-neighborhood of $\Delta_N \subset N \times N$, and by the $\epsilon$-neighborhood theorem we have a normal projection map to $\Delta_N$, with which we compose to get a map $g : M \to N$. $g \times g$ is $C^1$-close to $G$ near $\Delta_M$, and since $G \pitchfork \Delta_N$, by stability of transverse maps, $(g \times g) \pitchfork \Delta_N$ should hold as well. Therefore $g \in \text{Imm}_\pitchfork(M, N)$. This implies we have a tower of successively dense subspaces $$\text{Imm}_\pitchfork(M, N) \subset \text{Imm}(M, N) \subset C^\infty(M, N)$$ In particular, $\text{Imm}_\pitchfork(M, N)$ is dense in $C^\infty(M, N)$, as required. (Note that for the above we need a relative-to-boundary version of this, but once $f$ is already an embedding restricted to (a collar neighborhood of) the boundary, we can do all the required homotopies fixing the boundary.)
Find all natural solutions $(a, b)$ such that $(ab - 1) \mid (a^2 + a - 1)^2$.
Let $(a,b)$ be a solution of $$ (ab - 1) \mid (a^2 \pm a - 1)^2$$ then $(b,c)$ is a solution of $$ (bc-1) \mid (b^2 \mp b - 1)^2,$$ where $$c=\frac{(b^2\mp b-1)^2+ab-1}{b(ab-1)}.$$ Proof Let $N=ab-1$. Then $0\equiv (a^2b^2\pm ab^2-b^2)^2\equiv (b^2\mp b-1)^2$ (mod $N$). Therefore $(b^2\mp b -1)^2=NM$ for some natural number $M$. Then $M\equiv -1$ (mod $b$) and so there is a natural number $c$ such that $M=bc-1.$ Application This gives us a formula for generating solutions, with every other iteration giving a solution of the original equation. Solutions $(a,a+1)$ just cycle round, but the solution $(2,1)$ generates an infinite set of solutions containing those obtained by @MartinR. In fact all solutions other than $(a,a+1)$ are generated from $(2,1)$. A proof that there are no other solutions Let $$(a^2\pm a-1)^2=(ab-1)(ac-1).$$ If either $b$ or $c$ is less than $a$ then the described procedure can be used to give us a smaller solution. Otherwise we have $b,c\ge a$. If $b=a$ then, for $N=ab-1$, we have $a^2\equiv 1$ (mod $N$) and then $a\equiv 0$ (mod $N$). The only possibility is $N=1$ and we have reached the base case. Otherwise $b,c\ge a+1$ and the only possibility is $b=c=a+1$. The solutions are as follows $$\begin{matrix} (2, 13)&& (13, 74)\\ (74, 433)&& (433, 2522)\\ (2522, 14701)&& (14701, 85682)\\ (85682, 499393)&& (499393, 2910674)\\ (2910674, 16964653)&& (16964653, 98877242)\\ \end{matrix}$$ $$\cdots$$ As described in the above answer every other pair gives a solution of the original equation. The remaining pairs if reversed also give solutions, but these simply form part of the other solutions. E.g. $(2,13)$ and $(74,433)$ are successive solutions. $(74,13)$ is also a solution, which is the 'other half' of the $(74,433)$ one, viz. $$(74^2+74-1)^2=(74\times 13-1)(74\times 433-1).$$
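For completeness, here is a short Python implementation of the recurrence, starting from $(2,1)$ and alternating the sign at each step; filtering for the original divisibility reproduces the pairs above:

```python
def chain(steps):
    a, b, sign = 2, 1, +1  # (2,1) solves (ab-1) | (a^2 + a - 1)^2
    pairs = []
    for _ in range(steps):
        sign = -sign  # the next pair solves the equation with the other sign
        num = (b * b + sign * b - 1) ** 2 + a * b - 1
        den = b * (a * b - 1)
        assert num % den == 0
        a, b = b, num // den
        pairs.append((a, b))
    return pairs

for a, b in chain(8):
    if (a * a + a - 1) ** 2 % (a * b - 1) == 0:  # solutions of the original
        print(a, b)
# (1, 2) is the trivial divisor-1 case; then (2, 13), (74, 433), (2522, 14701), ...
```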
Find coefficients of polynomial, knowing its roots are consecutive integers
The brute force method that doesn't require any formulas or theorems would be like so: You know that there are $3$ consecutive zeroes $(n-1)$, $n$, $(n+1)$. It's important to pick them like so as opposed to $n$, $(n+1)$, $(n+2)$ because this will save you a lot of algebra. $0=(n-1)^3-15(n-1)^2+a(n-1)+b $ $0=n^3-15n^2+an+b $ $0=(n+1)^3-15(n+1)^2+a(n+1)+b $ $\color{red}{\text {This is how to start it. Below is the answer, so if you just wanted a hint you can stop here.}}$ Get rid of the $b$ first because it's easy. To do this, subtract the middle equation from the other two, producing two equations with $a$ and $n$ as unknowns: $$0-0=[(n-1)^3-15(n-1)^2+a(n-1)+b ]- (n^3-15n^2+an+b )$$ Which simplifies to $$0=-a-3n^2+33n-16 $$ And $$0-0 = [(n+1)^3-15(n+1)^2+a(n+1)+b ] - (n^3-15n^2+an+b )$$ Which simplifies to $$0=a+3n^2-27n-14 $$ So we have the system $$0=-a-3n^2+33n-16 $$ $$0=a+3n^2-27n-14 $$ Now add these $2$ equations together, and get $$0=6n-30$$ Therefore $$n=5$$ Now that you have $n$, you know that the other solutions are of the form $n \pm 1$, so the $3$ roots are $4, 5, 6$. Now plug just two of these into your cubic (maybe the two smallest ones) and you will have two equations with $a$ and $b$ as unknowns. $$0=4^3-15\cdot4^2+4a+b$$ $$0=5^3-15\cdot5^2+5a+b$$ Simplifying, $$0=4a+b-176$$ $$0=5a+b-250$$ Solving this system, we get $$a=74, \ b=-120$$ EDIT: As noticed by mathguy in the comments below, we can cut down on the work required. Once we find $n$, we can use the equation $$0=a+3n^2-27n-14 $$ to find $a$: $$0=a+3 \cdot(5)^2-27 \cdot (5) -14$$ $\implies$ $a=74$ Now we can use one of the original $3$ equations (the one with $n-1$ for the easiest computation?) to find $b$: $$0=(5-1)^3-15(5-1)^2+74(5-1)+b $$ $\implies$ $b = -120$ $$a=74, \ b=-120$$
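And a one-line check of the result (illustrative):

```python
a, b = 74, -120
for x in (4, 5, 6):
    assert x**3 - 15 * x**2 + a * x + b == 0
print("x^3 - 15x^2 + 74x - 120 has roots 4, 5, 6")
```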
Use the central limit theorem to approximate a probability of a sum of Poisson random variables.
The sum of independent Poisson random variables is Poisson. So that much is rock solid. However, the normal approximation to Poisson works best for large $\lambda$ and 10 is not quite large enough for most applications of the CLT. Nowadays software simplifies exact computation. The exact result (to four places) from R statistical software is $0.0835 \ne 0.0571:$ 1 - ppois(14, 10) ## 0.08345847 Your normal approximation could be improved by a "continuity correction" and interpolation when using printed normal tables (or using software for the normal value). You can read about the continuity correction, or look at the figure to see why you want the approximating area to lie under the normal curve and above 14.5. Notice that $P(X \ge 15) = P(X > 14.5) = P(X > 14)$ because of the discreteness of the Poisson random variable. 1 - pnorm(14.5, 10, sqrt(10)) ## 0.07736446
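For readers working in Python rather than R, the same numbers come out of scipy (assuming scipy is available):

```python
from math import sqrt
from scipy.stats import poisson, norm

print(poisson.sf(14, 10))           # P(X >= 15) = 1 - ppois(14, 10) ~ 0.0835
print(norm.sf(14.5, 10, sqrt(10)))  # continuity-corrected normal ~ 0.0774
```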
Finding the Taylor polynomial of $f(x) = \frac{1}{x}$ with induction
We will prove that if $f:x \mapsto \dfrac{1}{x}$ then $f^{(n)}(x)=\dfrac{(-1)^n n!}{x^{n+1}}$, for every $n \in \mathbb{N}^*$. For $n=1$ we have $f'(x)=\dfrac{-1}{x^2}=\dfrac{(-1)^1 \cdot 1!}{x^{1+1}}$. If we assume that $f^{(n)}(x)=\dfrac{(-1)^n n!}{x^{n+1}}$ then $$f^{(n+1)}(x)=\left[ \dfrac{(-1)^n \cdot n!}{x^{n+1}}\right]'=(-1)^n n! \cdot \dfrac{-(n+1)}{x^{n+2}}=\dfrac{(-1)^{n+1}(n+1)!}{x^{(n+1)+1}},$$ hence by induction the result follows. Finally, the Taylor series of $x \mapsto 1/x$ about the point $a=1$ is $$T(x)=\sum_{k=0}^{\infty} \dfrac{\frac{(-1)^kk!}{1^{k+1}}}{k!}(x-1)^k=\sum_{k=0}^{\infty}(-1)^k(x-1)^k.$$
Struggling to solve $\ w^2 + 2iw = i \ $
Your equation is a quadratic equation. Apply the quadratic formula.
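If you want to check your two roots numerically afterwards, numpy will do it (the equation rearranges to $w^2+2iw-i=0$):

```python
import numpy as np

roots = np.roots([1, 2j, -1j])  # coefficients of w^2 + 2i w - i
for w in roots:
    print(w, abs(w**2 + 2j * w - 1j))  # residuals ~ 0
```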
Number of real roots of a polynomial
Sturm's theorem is a very powerful tool used to count real roots, but may be overkill here since your method also works and is arguably simpler. We take $f(x)$ and $f^\prime(x)$ and then compute successive (negated) remainders $r_1(x),r_2(x),\dots$ of polynomial long division (similar to the Euclidean algorithm) to get the sequence of polynomials $f(x), f^\prime(x), r_1(x), \dots$. So if $f(x) = x^5 + x^3 - 2x +1$ and $f^\prime(x) = 5x^4 + 3x^2 - 2$, then $$f(x) = \frac{x}{5} \cdot f^\prime(x) - (-\frac{2}{5}x^3+\frac{8}{5}x - 1)$$ $$f^\prime(x) = (-\frac{25}{2}x)(-\frac{2}{5}x^3 + \frac{8}{5}x - 1) - (-23x^2 + \frac{25}{2}x + 2)$$ $$-\frac{2}{5}x^3 + \frac{8}{5}x - 1 = (\frac{2}{115}x + \frac{5}{529})(-23x^2 + \frac{25}{2}x + 2) - (-\frac{1531}{1058}x + \frac{539}{529})$$ $$-23x^2 + \frac{25}{2}x + 2 = (\frac{24334}{1531}x+\frac{5984577}{2343961})(-\frac{1531}{1058}x + \frac{539}{529}) - (+\frac{1409785}{2343961})$$ Now, for really large numbers, this sequence of polynomials gives the signs $+,+,-,-,-,+$, which has $2$ sign changes. For really large negative numbers, the sequence has the signs $-,+,+,-,+,+$, which has $3$ sign changes. The single additional sign change means that $f(x)$ has a single real root.
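sympy can reproduce both the Sturm chain and the root count (illustrative):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Poly(x**5 + x**3 - 2*x + 1, x)
chain = sp.sturm(f)     # the Sturm chain computed by hand above
print(len(chain))       # 6 polynomials
print(f.count_roots())  # 1 real root
```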
Radius of convergence of a general series
Let $x^2=y$. Note that if $|y|\gt R$, then $\sum a_ny^n$ diverges, and if $|y|\lt R$, then $\sum a_ny^n$ converges. Thus if $x^2\gt R$, then $\sum a_n x^{2n}$ diverges, and if $x^2\lt R$, then $\sum a_nx^{2n}$ converges. Thus if $|x|\gt\sqrt{R}$, we have divergence, and if $|x|\lt \sqrt{R}$, we have convergence, and therefore the radius of convergence is $\sqrt{R}$.
Learning vector spaces and matrices
It will hardly be possible to study one without the other. Matrices are, in most cases, just vectors in $K^{n\times n}$, and if you have a linear morphism of finite dimensional $K$-vector spaces $$f:V \rightarrow W$$ you can always express $f$ in terms of matrices with respect to chosen bases of $V,W$ respectively. If you know how $f$ acts on a basis, you already know how $f$ acts on the entire space. You can express this information using matrices. To get an algebraic understanding of matrices, just start with some basic book on linear algebra. Michael Artin's book on Algebra might be a nice choice. It is always good to start that way, even if you need matrices for computational aspects in discrete mathematics or physics.
Always, Sometimes, Never Question
Try $a = 1/2$ and $b = 1/3$; that should work.
How calculate $\pi$ to an accuracy of 10 decimal places?
Hint: $\pi$ has an irrationality measure of no more than $7.6063$.
Embedding for homogeneous Sobolev spaces
If $\Omega$ is a bounded domain, then we have the following Poincaré inequality: There is a constant $C>0$ such that $$ \int_{\Omega}|u|^p\leq C\int_{\Omega}|\nabla u|^p $$ for all $u\in C_c^\infty(\Omega)$. In other words, $\|\nabla\cdot\|_p$ and $\|\cdot\|_p+\|\nabla\cdot\|_p$ are equivalent norms on $C_c^\infty(\Omega)$ and thus $W^{1,p}_0(\Omega)=\dot{W}^{1,p}_0(\Omega)$ with equivalent norms. Hence you can apply the classical embedding theorems for Sobolev spaces that Evan William Chandra mentioned in the comments to get the embedding you want.
An elementary problem in Group Theory: the unique noncyclic group of order 4
Here is a full solution: This group has only one element of order 1 ($a^1 = a$ so if $a^1=1$, then $a=1$). This group has no elements of order 4 or greater (4 is outlawed by hypothesis; order 5 or greater implies that $a^1,a^2,a^3,a^4,a^5$ are 5 distinct elements of a 4 element set). This group has no elements of order 3 (if $a$ has order 3, then $a^2 \neq 1$ and $a^2 \neq a$, so $a^2 =b$ (or $c$, but WLOG we choose $b$). Hence $ab=ba=1$ and $b^2=a$. What about $ca$? $ca=1$ implies $c=b$. $ca=a$ implies $c=1$. $ca=b$ implies $c=a$. $ca=c$ implies $a=1$. Oh no!) Therefore all elements have order 2, and we finish exactly as you did.
Give example of a distribution.
For the second problem, we can cheat, and let $X=-1$ with probability $p$, and $X=1$ with probability $1-p$. The random variables $X$ and $\dfrac{1}{X}$ not only have the same distribution, they are the same random variable. Now that we are cheating, let's go all the way. Let $X=1$ with probability $1$. For cheating a little less, let $X$ take on the values $\dfrac{1}{2}$ and $2$ each with probability $\frac{1}{2}$. Then $X$ and $\dfrac{1}{X}$ are not the same random variable, but they have the same distribution. A continuous distribution is more challenging. But for example a random variable $X$ with density function $f_X(x)=\dfrac{1}{x}$ for $e^{-1/2}\le x\le e^{1/2}$ and $f_X(x)=0$ elsewhere has the desired property.
Determine whether the following is a subspace of the indicated vector space
To show it is a subspace, you have to show that the zero vector (in this case, the vector $(0,0)$) is in $U = \{ (x_1,x_2) : x_1 - 5x_2 = 0 \} \subset \mathbb{R}^2$, that if you take two vectors ${\bf x} \in U$ and ${\bf x'} \in U$ then ${\bf x+x'} \in U$, and that if $\alpha \in \mathbb{R}$ then $\alpha {\bf x} \in U$. Clearly, $0-5\cdot 0 = 0$, thus the vector $(0,0) \in U$. Now, take two vectors in $U$, say ${\bf x} = (x_1,x_2)$ and ${\bf x'} = (x_1',x_2') \in U$. This means that $x_1 - 5x_2 = 0$ and $x_1' - 5x_2' = 0$. Now, I want to show that ${\bf x + x'} \in U$. In other words, I want to show that $(x_1+x_1')-5(x_2+x_2')=0$. Indeed, \begin{align*} (x_1+x_1')-5(x_2+x_2') &= x_1+x_1' - 5x_2-5x_2' \\ &= (x_1-5x_2) + (x_1'-5x_2') \\ &= 0 + 0\\ &= 0 \end{align*} Now, if $\alpha$ is a scalar, and ${\bf x} \in U$, can you show that $\alpha {\bf x}$ is in $U$? That is, you want to show that $\alpha x_1 - 5 (\alpha x_2) = 0$. Isn't it obvious?
Time invariace of a linear system dependent on a particular time instant
A time-invariant system reacts to a shifted input by an equal shift in its response, i.e. if $y[n]$ is the system's response to $x[n]$, then for the system to be time-invariant, its response to $x[n-k]$ must be $y[n-k]$ for every integer $k$. Now let's assume that $y[n]$ is the response to $x[n]$ and let's see what happens if we use $x_1[n]=x[n-k]$ as an input signal: $$y_1[n]=x_1[n]+35x_1[n-1]+x_1[0]=x[n-k]+35x[n-k-1]+x[-k]$$ which is not equal to $y[n-k]$, so the system is not time-invariant.
Helpful Integrals for evaluating Fourier series, my book is wrong?
A change of variables may help. For example, $$ \int_{0}^{T}\sin(n\omega t)\,dt = \frac{1}{n\omega}\int_{0}^{T}\sin(n\omega t)\,d(n\omega t)=\frac{1}{n\omega}\int_{0}^{n\omega T}\sin(u)\,du. $$ Therefore, if $n\omega T$ is a multiple of $2\pi$, you'll get $0$ for the integral of $\sin(n\omega t)$ and of $\cos(n\omega t)$. Otherwise, one of the integrals (sin or cos) may not be $0$. So you are looking at $x_5(t)$ with a period of $T_{0}=2T$. That means computing $$ \begin{align} a_{n}& =\frac{1}{T}\int_{0}^{2T}x_5(t)\cos(2\pi n t/2T)\,dt \\ & =\frac{1}{T}\int_{0}^{T}\{1-0.5\sin(\pi t/T)\}\cos(\pi n t/T)\,dt \\ b_{n}& =\frac{1}{T}\int_{0}^{2T}x_5(t)\sin(2\pi n t/2T)\,dt \\ & =\frac{1}{T}\int_{0}^{T}\{1-0.5\sin(\pi t/T)\}\sin(\pi n t/T)\,dt. \end{align} $$ I like being explicit about the constants instead of trying to unravel them later. The expression of interest to you is $$ \begin{align} b_{n1} & = \frac{1}{T}\int_{0}^{T}\sin(\pi nt/T)\,dt \\ & = \left.-\frac{1}{n\pi}\cos(n\pi t/T)\right|_{0}^{T} \\ & = -\frac{1}{n\pi}(\cos(n\pi)-1)=\frac{1}{n\pi}(1-(-1)^{n}). \end{align} $$ It looks like you and I agree on $b_{n1}$. For the other part, you have to reduce $$ \begin{align} \sin(\pi t/T)\sin(\pi n t/T) &=\frac{1}{2}\left[\cos(\pi t/T-\pi nt/T)-\cos(\pi t/T+\pi nt/T)\right] \\ & = \frac{1}{2}\left[\cos((n-1)\pi t/T)-\cos((n+1)\pi t/T)\right]. \end{align} $$ Your $b_{n2}$ has the constant out front. There's a $0.5$ from the original problem, another $1/2$ from the trig identity, and a division by $T$ from the Fourier normalization. So, $$ b_{n2}=\frac{1}{4T}\int_{0}^{T}\left\{\cos((n-1)\pi t/T)-\cos((n+1)\pi t/T)\right\}\,dt $$ You have to be careful for the case where $n=1$ because $\cos((n-1)\pi t/T)$ is really the constant function $1$ in that case.
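A numerical spot check of $b_{n1}$ (illustrative, using scipy):

```python
from math import pi, sin
from scipy.integrate import quad

T = 2.0
for n in range(1, 6):
    # (1/T) * integral_0^T sin(n pi t / T) dt vs. (1 - (-1)^n) / (n pi)
    val, _ = quad(lambda t: sin(n * pi * t / T) / T, 0, T)
    print(n, round(val, 6), round((1 - (-1) ** n) / (n * pi), 6))
```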
Number of 3-colorings of an n-length cycle
HINT: Number the vertices of the cycle $1$ through $n$ in cyclic order. If vertices $1$ and $n-1$ have different colors, you can remove vertex $n$ to get a properly colored $(n-1)$-cycle. Conversely, given a properly colored $(n-1)$-cycle, you can insert a vertex $n$ between vertices $1$ and $n-1$, and there is only one possible way to color it. That accounts for $A(n-1)$ properly colored $n$-cycles, those in which vertices $1$ and $n-1$ have different colors. To finish the job, you need to show that there are $2A(n-2)$ properly colored $n$-cycles in which vertices $1$ and $n-1$ are the same color. Consider removing vertices $n$ and $n-1$.
Wrong proof for the variance of a sum of normally-distributed variables?
Believe it or not, this is pretty much correct. The more systematic mathematical way of doing it is as a formal change of variables: $$ W = X+Y\\Z = \frac{BX-AY}{\sqrt{AB(A+B)}}.$$ The key thing Taylor glosses over here is that since the change of variables is linear, the Jacobian is just a number and only has a trivial effect on the joint distribution $f_{W,Z}.$ Then once you've used the change of variables formula to get $f_{W,Z}$ (up to an unimportant normalization constant), just integrate over $Z$ to get the distribution for $W$ you're interested in. Like Taylor says, this integral just contributes another trivial constant, and $W$ and $Z$ wind up being independent.
Question about Circulant Matrices and Isomorphisms
The answer is yes. Of course, you do need to confirm that your mapping is in fact an isomorphism, but this is straightforward. First establish that the mapping $f$ is a homomorphism. Then show that the kernel of your homomorphism is $H$. Since you are dealing with elements of $GL_3(\mathbb R)$, this can be done by direct multiplication of your two different forms. I should point out that you are using an unconventional definition for a circulant matrix. Elements of your group $H$ fit the standard definition of a circulant matrix while elements of your group $G$ have a more general form.
Estimation with an orthonormalbasis in some finite dimensional subspace of $L_2(\Omega)$
Perhaps one way to see this is to notice that if $f \in L^2$, then you can write $f = f_{//} + f_{\perp}$ where $f_{//} \in S_N$ and $f_{\perp} \in S_N^{\perp}$. In particular, $\|f\|_{L^2}^2 = \|f_{//}\|_{L^2}^2 + \|f_{\perp}\|^2_{L^2}$, which implies that $\|f\|_{L^2}^2 \geq \|f_{//}\|_{L^2}^2$. Notice that $\langle f, \varphi_{j,N}\rangle = \langle f_{//}, \varphi_{j,N} \rangle$ and so $$\sum_{j=1}^N |\langle f, \varphi_{j,N}\rangle|^2 = \sum_{j=1}^N |\langle f_{//}, \varphi_{j,N} \rangle|^2 = \|f_{//}\|_{L^2}^2 \leq \| f\|_{L^2}^2$$
Pairing higher forms of a Lie group with the universal enveloping algebra
(Nothing I'm about to say requires $G$ to be compact.) $\mathfrak{g}$ can be interpreted as the left-invariant vector fields on $G$, which means it acts on the space of differential forms $\Omega^{\bullet}(G)$ via the Lie derivative. Since this is a Lie algebra action, it extends to an action of $U(\mathfrak{g})$ by differential operators on $\Omega^{\bullet}(G)$. This action restricts to an action on left-invariant differential forms, which can be identified with $\wedge^{\bullet}(\mathfrak{g}^{\ast})$, so we get an induced action of $U(\mathfrak{g})$ on the exterior algebra $\wedge^{\bullet}(\mathfrak{g}^{\ast}).$ This action does not extend the pairing between $\mathfrak{g}$ and $\mathfrak{g}^{\ast}$, which makes no use of the Lie bracket; instead it comes from thinking of $\mathfrak{g}^{\ast}$ as the coadjoint representation (the dual to the adjoint representation), which in particular means that it takes an element $X \in \mathfrak{g}$ and an element $v \in \mathfrak{g}^{\ast}$ and produces an element $Xv \in \mathfrak{g}^{\ast}$, not a scalar. The global version of this is that the Lie derivative of a $1$-form is another $1$-form.
CDF and PDF of a likelihood ratio random variable?
The distribution is $\chi^2$ only asymptotically, and only if $H_0$ is true; the finite-sample distribution generally depends on the distribution of $\{Y_n\}_{n=1}^{N}$. The number of degrees of freedom depends on the hypothesis: for a simple $H_0: \theta = \theta_0$ it is the number of dimensions of $\theta$, while for a general $H_0: g(\theta) = 0$ it is (under regularity conditions) the rank of the Jacobian of $g(\theta)$, i.e. the number of independent restrictions.
In a noetherian integral domain every non invertible element is a product of irreducible elements
Let $X$ be your set of nonzero nonunits which cannot be written as a product of irreducible elements. Towards a contradiction, suppose $X\neq\emptyset$, and pick $x_0\in X$. Then $x_0$ itself is not irreducible, so we may write $x_0=xy$, where $x$ and $y$ are nonzero nonunits. If both $x$ and $y$ can be written as a product of irreducibles, then $x_0$ can as well, a contradiction. So at least one of $x$ and $y$ is not a product of irreducibles, say $x$; call it $x_1$, so $x_1\in X$. Then $(x_0)\subset (x_1)$. Continue this process to yield an ascending chain $$ (x_0)\subset (x_1)\subset\cdots $$ in $R$. Now use the fact that $R$ is Noetherian to find a contradiction. Since $R$ is Noetherian, $(x_n)=(x_{n+1})$ for some $n$. By our construction, $x_n=yx_{n+1}$ for some nonunit $y$. But then $x_{n+1}\in (x_n)=(yx_{n+1})$, so for some nonzero $z\in R$, we have $x_{n+1}=zyx_{n+1}$. By cancellation, since we are in a domain, $1=yz$, so $y$ is a unit, a contradiction. So $X=\emptyset$.
Endomorphism ring of a Noetherian module
More generally, if $A$ is a commutative ring and $M$ and $N$ are $A$-modules with $M$ finitely generated and $N$ Noetherian, then $\operatorname{Hom}_A(M,N)$ is a Noetherian $A$-module. Indeed, if $M$ is generated by $x_1,\dots,x_n$, then $\operatorname{Hom}_A(M,N)$ is isomorphic to a submodule of $N^n$, by mapping a homomorphism $f$ to $(f(x_1),\dots,f(x_n))$. Since $N$ is Noetherian, so is any submodule of $N^n$, so $\operatorname{Hom}_A(M,N)$ is Noetherian.
Quotient spaces in linear algebra
(1) An equivalence relation is a relation $\sim$ which satisfies the following conditions: $$x \sim x \qquad \mbox{ for all } x \tag{reflexive}$$ $$x \sim y \iff y \sim x \tag{symmetric}$$ $$x \sim y \mbox{ and } y \sim z \implies x \sim z \tag{transitive}$$ If you replace $\sim$ with $=$, these three properties hold, so we might informally say that an equivalence relation is a relation which is 'like' equality. In your case, $x \sim y \iff x-y \in U$, so saying $v$ and $w$ are linked by the equivalence relation $\sim$ means (geometrically) that if we consider the coset $U+v$ of all vectors which have the form $u+v$ for $u \in U$, then $w$ is in this coset. (2) The equivalence class $[v]$ is the set of all vectors $w$ such that $v \sim w$. In fact, $V/U$ is not an equivalence class: it is the vector space of all equivalence classes $[v]$, where $v \in V$. To see why $V/U$ is a vector space, define addition and scalar multiplication by $$[v] + [w] = [v+w]$$ $$k[v] = [kv]$$ It is not entirely obvious that these operations are well defined: if $v' \in [v]$ (that is, if $v' \sim v$) and $w' \in [w]$, then $[v'] = [v]$ and $[w'] = [w]$, so we would need to show that $[v'+w'] = [v+w]$ and that $[kv'] = [kv]$ (that is, that the value of $[v] + [w]$ and $k[v]$ does not depend on the representative we choose for $[v]$ and $[w]$). However, we can prove that if $v \sim v'$ and $w' \sim w$, then $$kv' \sim kv$$ and $$v'+w' \sim v + w$$ (I'll leave the details to you. Just work using the definition of $\sim$.) From this, we have that $[v'+w'] = [v+w]$ and $[kv'] = [kv]$ by transitivity.
Asymptotic growth of $\frac{n}{\phi(n)}$.
If $n=p^a$ then you can compute $\varphi(n)$ and $n/\varphi(n)$ easily, and note that $a$ is essentially a logarithm of $n$ (namely $a=\log_p n$). If $n$ is arbitrary, use the fact that $\varphi$ is multiplicative. This is not the final answer, but it is a start.
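To see the multiplicative structure at work numerically (SymPy's `totient`; the prime ranges are arbitrary): for a prime power $p^a$ the ratio $n/\varphi(n)=p/(p-1)$ does not depend on $a$ at all, while products of many small primes push the ratio up, but only slowly.

```python
from sympy import totient, primerange

# Prime powers: n/phi(n) = p/(p-1), independent of the exponent a.
for p, a in [(2, 1), (2, 10), (3, 7), (5, 4)]:
    n = p**a
    print(n, n / totient(n), p / (p - 1))

# Primorials: multiplying in many small primes makes n/phi(n) grow (slowly).
n = 1
for p in primerange(2, 40):
    n *= p
    print(n, float(n / totient(n)))
```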
Let $\mathcal S$ be the collection of all straight lines in the plane $\mathbb R^2$. If $\mathcal S$ is a subbasis for a topology ...
Let $\mathscr T$ be a topology on a non-empty set $X$ and let $\mathscr S\subseteq \mathscr T$. Then, by definition, $\mathscr S$ is a subbasis for the topology $\mathscr T$ if for any $U\subseteq X$, one has that $U\in\mathscr T$ if and only if $U$ can be expressed as a union of sets which are finite intersections of sets in $\mathscr S$. Formally: $$U=\bigcup_{\gamma\in\Gamma}\bigcap_{i\in F_{\gamma}} S_i^{\gamma},$$ where $\Gamma$ is an (arbitrary) index set, for each $\gamma\in\Gamma$, $F_{\gamma}$ is a finite index set, and for each $\gamma\in\Gamma$ and $i\in F_{\gamma}$, it holds that $S_i^{\gamma}\in\mathscr S$. In the concrete example, note that any set consisting of a single point in $\mathbb R^2$ can be expressed as the intersection of two (distinct) lines, both of which are in $\mathscr S$ by assumption (in this case, the index set $\Gamma$ has a single element $\gamma$, and the index set $F_{\gamma}$ has two elements, corresponding to the two lines). This implies that every one-point set in $\mathbb R^2$ is open according to this new topology $\mathscr T$, so that $\mathscr T$, in fact, must be the discrete topology (in which every subset is open). In the light of your question: However, $\mathbb R^2$ is Hausdorff, and so singleton sets (i.e., points) are closed? Two comments: Firstly, the new topology $\mathscr T$ is different from the standard Euclidean topology. This is a common source of confusion, since we are so used to working with the usual Euclidean topology on $\mathbb R^2$ that we have a strange feeling upon seeing that nothing prevents us from defining any other topology on $\mathbb R^2$. Therefore, that the Euclidean topology is Hausdorff does not say anything about whether the discrete topology $\mathscr T$ is Hausdorff or not. That said, secondly, the discrete topology on $\mathbb R^2$ does happen to be Hausdorff, so that it is also $T_1$ (i.e., the singletons are closed). This is compatible with the fact that singleton sets are open in the discrete topology, since any subset of the space is both open and closed under this topology.
How can you disprove the statement $4=5$?
Hold up 4 fingers on your left hand. Hold up 5 fingers on your right hand (yes, count your thumb as a finger). Show that no matter how you try to pair them off, there's always one left over on your right hand. For added effect, make it the middle finger.
What proportion of positive integers have two factors that differ by 1?
Every even number has two factors that differ by $1$: namely $1$ and $2$. No odd number has, because all its factors are odd. The proportion (as natural density) is therefore $1/2$.
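A quick empirical check of that density (the cutoff is arbitrary):

```python
def has_factors_differing_by_one(n):
    return any(n % d == 0 and n % (d + 1) == 0 for d in range(1, n))

count = sum(has_factors_differing_by_one(n) for n in range(1, 10001))
print(count / 10000)  # exactly 0.5: precisely the even numbers qualify
```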
one dimensional real projective space
I don't have a picture for you, so you'll have to use your imagination, but here goes: Take the circle, twist it into a figure 8. As a first step, we notice that one of the circles in the figure is the upper semicircle, while the other one is the lower semicircle. We can also notice that the waist consists of two overlapping points, the endpoints of the semicircles. These we identify with each other. Now fold the figure 8 at the waist so the two circles overlap. At every point, there are now two points above one another. Identify them with each other, and you're done.
Name for a continuous surjection such that $\operatorname{cl}(f^{-1}(A)) = f^{-1}(A') \implies \operatorname{cl}(A) = A'$
These are quotient maps. Proof: We want to show the equivalence of these two conditions: For all $A,A'\subseteq X'$, $\operatorname{cl}(f^{-1}(A)) = f^{-1}(A') \implies \operatorname{cl} A = A'$. For all $A\subseteq X'$, $A$ is closed iff $f^{-1}(A)$ is closed. (1 $\Rightarrow$ 2) Suppose (1). Let $A\subseteq X'$. The implication $$ \text{$A$ is closed} \implies \text{$f^{-1}(A)$ is closed} $$ holds because $f$ is continuous. The reverse implication is a special case of (1): $$ \operatorname{cl}(f^{-1}(A)) = f^{-1}(A) \implies \operatorname{cl} A = A $$ (2 $\Rightarrow$ 1) Suppose (2). Let $A,A'\subseteq X'$, and suppose $\operatorname{cl}(f^{-1}(A)) = f^{-1}(A')$. Since $f$ is continuous, $$ f(f^{-1}(A)) \subseteq f(\operatorname{cl}(f^{-1}(A))) \subseteq \operatorname{cl}(f(f^{-1}(A))) $$ Applying the hypothesis yields $$ f(f^{-1}(A)) \subseteq f(f^{-1}(A')) \subseteq \operatorname{cl}(f(f^{-1}(A))) $$ Since $f$ is surjective, this means $$ A \subseteq A' \subseteq \operatorname{cl}(A) $$ But by hypothesis $f^{-1}(A')$ is the closure of some set, so it is closed; by (2), $A'$ is closed. Since $A \subseteq A'$ and $A'$ is closed, $\operatorname{cl}(A) \subseteq A'$; together with $A' \subseteq \operatorname{cl}(A)$ from above, this yields $A' = \operatorname{cl}(A)$.
problem with eigenvalues and eigenvectors_2
Let's try a shortcut. The equation of the kernel tells you that $A \begin{pmatrix} x \\ x+2z \\ z \end{pmatrix} = 0$. The eigenvalue equation tells you that $A \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} = \dfrac 1 2 \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$. Taking $x = z = 1$ gives you $0 = A \begin{pmatrix} 1 \\ 3 \\ 1 \end{pmatrix} = A \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} + A \begin{pmatrix} 0 \\ 3 \\ 0 \end{pmatrix} = \dfrac 1 2 \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} + 3 A e_2$, whence $A e_2 = - \dfrac 1 6 \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} = - \dfrac 1 6 e_1 - \dfrac 1 6 e_3$. Taking now $x=0$ and $z=1$ gives $0 = A \begin{pmatrix} 0 \\ 2 \\ 1 \end{pmatrix} = 2 A e_2 + A e_3 = - \dfrac 1 3 \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} + A e_3$, whence $A e_3 = \dfrac 1 3 \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} = \dfrac 1 3 e_1 + \dfrac 1 3 e_3$. Taking $x=1$ and $z=0$ gives $0 = A \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} = A e_1 + A e_2$, whence $A e_1 = \dfrac 1 6 \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} = \dfrac 1 6 e_1 + \dfrac 1 6 e_3$. It follows that $A = \begin{pmatrix} \dfrac 1 6 & - \dfrac 1 6 & \dfrac 1 3 \\ 0 & 0 & 0 \\ \dfrac 1 6 & - \dfrac 1 6 & \dfrac 1 3 \end{pmatrix} = \dfrac 1 6 \begin{pmatrix} 1 & -1 & 2 \\ 0 & 0 & 0 \\ 1 & -1 & 2 \end{pmatrix}$.
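A quick numerical check of the result (the kernel test vectors are arbitrary):

```python
import numpy as np

A = np.array([[1.0, -1.0, 2.0],
              [0.0,  0.0, 0.0],
              [1.0, -1.0, 2.0]]) / 6

# Eigenvalue condition: A (1,0,1)^T = (1/2) (1,0,1)^T
v = np.array([1.0, 0.0, 1.0])
assert np.allclose(A @ v, 0.5 * v)

# Kernel condition: A (x, x+2z, z)^T = 0 for all x, z
for x, z in [(1.0, 0.0), (0.0, 1.0), (2.0, -3.0)]:
    assert np.allclose(A @ np.array([x, x + 2 * z, z]), 0.0)
```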
Multivariate series inversion
Recall that a univariate formal power series can be inverted: \begin{eqnarray*} f(x)&=&x+a_1 x^2+ a_2 x^3 +\cdots \\ f^{-1}(x) &=& x-a_1 x^2 +(2a_1^2 -a_2)x^3+ \cdots \end{eqnarray*} A $2$-dimensional analogue can be done in your case. Let $a_1=1$ and $b_1=1$ for the sake of simplicity; this can be achieved by rescaling $u$ and $v$. Your function is $T:(v,u) \rightarrow (X=F(u,v),Y=G(u,v))$ with \begin{eqnarray*} X&=&v+a_2 u^2 v + a_3 u^2 v^3 \\ Y&=&u +b_2 uv^2+b_3u^3v +b_4 u^2 v^3. \end{eqnarray*} The inverse function $S:(X,Y) \rightarrow (v=V(X,Y),u=U(X,Y))$ is (up to third order) \begin{eqnarray*} v&=&X-a_2 X Y^2 +\cdots \\ u&=& Y-b_2 X^2 Y +\cdots \end{eqnarray*} Now add fourth order terms ($\alpha_u X^4+ \beta_u X^3 Y + \gamma_u X^2 Y^2 +\delta_u X Y^3 +\epsilon_u Y^4$) to these series and use $T \circ S (X,Y)=(X,Y)$ to calculate the next terms.
If $2<x^2<3$, then find the number of solutions which satisfy $({x^2})=1/(x)$
Unless I am mistaken, both $(x^2)$ and $(x)$ are not greater than $1$, so there cannot be a solution unless both are in fact equal to $1$. But that cannot be, since there is no integer between $2$ and $3$.
square roots of 1, congruences
We want $2^s$ to divide $(x-1)(x+1)$. Note that $x-1$ and $x+1$ are even. Since they differ by $2$, one of them is divisible by $2$ but by no higher power of $2$, and the other is divisible by $2^{s-1}$. Let us take the case $x-1$ divisible by $2$ but by no higher power of $2$, and you can take care of the other case. So $x+1$ has to be divisible by $2^{s-1}$. Thus $x=-1+k\cdot 2^{s-1}$ for some integer $k$. If $k$ is odd, that gives the solution $x\equiv -1+2^{s-1}\pmod{2^s}$. If $k$ is even that gives the solution $x\equiv -1\pmod{2^s}$. The reason that we need $s\ge 3$ is that if $s\le 2$ the "four" roots we have found will not be distinct modulo $2^s$. Remark: The difference between $2$ and an odd prime $p$ is that $2$ can divide both $x-1$ and $x+1$, while an odd prime cannot.
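A brute-force confirmation that these are exactly the four square roots of $1$ modulo $2^s$ for $s\ge 3$:

```python
for s in range(3, 13):
    m = 2**s
    roots = sorted(x for x in range(m) if (x * x) % m == 1)
    expected = sorted({1, m - 1, 2**(s - 1) - 1, 2**(s - 1) + 1})
    assert roots == expected  # x = +-1 and x = +-1 + 2^(s-1) mod 2^s
```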
Characterization of cosine of rational multiples of $\pi$
Only a tautologous criterion. You are asking whether $x$ is the real part of a root of unity, in other words whether $x$ is of the form $\frac12(\zeta+1/\zeta)$ for a root of unity $\zeta$. Not very useful, I think.
What is the "circle-plus" symbol I see in Abstract Algebra?
It's most likely the direct sum symbol. The direct sum $A\oplus B$ of two abelian groups/rings/modules $A$ and $B$ is the cartesian product $A\times B$ as sets, with the componentwise operation $(a,b)\ast_{A\oplus B}(c,d)=(a\ast_A c,b\ast_B d)$ where $\ast$ is the addition in the respective object (multiplication is similarly defined if you're working in a ring/module).
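As a toy illustration of the componentwise operation (plain Python tuples standing in for elements of $A\times B$):

```python
# Componentwise "addition" on A x B; works for any components supporting +.
def direct_sum_add(p, q):
    (a, b), (c, d) = p, q
    return (a + c, b + d)

print(direct_sum_add((1, 2.5), (3, -1.0)))  # (4, 1.5)
```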
Solving recurrence relation $a_{n+1} = \frac{3a_n^2}{a_{n-1}}$
As you suggested, applying $\log$ on both sides we obtain $$\log a_{n+1}=2\log a_n-\log a_{n-1}+\log 3.$$ Setting $b_n=\log a_n$, this is a linear recurrence with constant inhomogeneity: the homogeneous solutions are $c_1+c_2 n$ (double root $1$) and a particular solution is $\tfrac{\log 3}{2}\,n(n-1)$, so $b_n=b_0+(b_1-b_0)n+\tfrac{\log 3}{2}\,n(n-1)$, i.e. $a_n=a_0\,(a_1/a_0)^n\,3^{n(n-1)/2}$.
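A sanity check of that closed form with made-up (rational) initial values, using exact arithmetic:

```python
from fractions import Fraction

a0, a1 = Fraction(2), Fraction(5)
a = [a0, a1]
for n in range(1, 9):
    a.append(3 * a[n] ** 2 / a[n - 1])  # a_{n+1} = 3 a_n^2 / a_{n-1}

for n, an in enumerate(a):
    closed = a0 * (a1 / a0) ** n * Fraction(3) ** (n * (n - 1) // 2)
    assert an == closed
```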
$\ell \in (\ell^{\infty})^{*}$ can be uniquely represent by the two functionals
Let $t_n=\ell(e_n)$. Note that if $s\in\ell^\infty$ then $$\left|\sum_{n=1}^Ns_nt_n\right|\le||\ell||\,||s||.$$So $\sum|t_n|\le||\ell||<\infty$, so we can define $\ell_1\in(\ell^\infty)^*$ by$$\ell_1(s)=\sum s_nt_n.$$ If $s\in c_0$ then the sum $s=\sum s_ne_n$ converges in $\ell^\infty$, so $\ell s=\ell_1s.$ Hence if $\ell_2=\ell-\ell_1$ then $\ell_2 s=0$ for every $s\in c_0$. For uniqueness: If $\sum s_nt_n=\sum s_nt_n'$ for every $s\in c_0$ then $s=e_n$ shows that $t_n=t_n'$. Or, less elementary but maybe more obvious: Say $K$ is the maximal ideal space of the Banach algebra $\ell^\infty$. Then $\ell^\infty\approx C(K)$ and $\Bbb N\subset K$ (or more properly, there is a canonical embedding of $\Bbb N$ in $K$). So an element of $(\ell^\infty)^*$ "is" a measure $\mu$ on $K$; now $\ell_1$ is just the restriction of $\mu$ to $\Bbb N\subset K$.
Convexity Proof for $\mathbb R ^n \backslash A$
A straight line divides the plane into two convex open half-planes, so the complement of a line is the union of two convex sets. In general, the same holds for a hyperplane in $\mathbb R^n$.
Is the square of a measure a measure?
Suppose $\nu$ is a (finite, non-zero) measure and consider some $A\in \Gamma$. Note that $$\begin{align} [\mu(A)+\mu(A^c)]^2 &=\mu(A\cup A^c)^2\\ &= \nu(A\cup A^c) \\ &= \nu(A)+\nu(A^c)\\ &= \mu(A)^2+\mu(A^c)^2 \end{align}$$ Hence $\mu(A)\mu(A^c)=0$ and we have $\mu(A)=0$ or $\mu(A^c)=0$. If $\mu(A)>0$, we have $\mu(A^c)=0$ thus $\mu(A)=\mu(X)$. Hence $\mu$ takes values in the set $\{0,\mu(X)\}$. If $A$ and $B$ are such that $\mu(A)>0$ and $\mu(B)>0$, then $\mu(A\cap B)=\mu(A)=\mu(B)=\mu(X)$, so $\mu$ must be quite pathological...
Is function P infinitely additive (countably or uncountably additive)?
$$ 1=P(\mathbb N) = P\left( \bigcup_{n\in\mathbb N} \{n\} \right) \overset{\Large\text{?}} = \sum_{n\,\in\,\mathbb N} P(\{n\}) = \sum_{n\,\in\,\mathbb N} 0 = 0. \\ \phantom{\frac11} $$ \begin{align} 1 &amp; = P(\mathbb N) = P( \text{set of all even members of } \mathbb N \cup \text{set of all odd members of } \mathbb N) \\[10pt] &amp; \overset{\Large\text{?}} = P( \text{set of all even members of } \mathbb N) + P(\text{set of all odd members of } \mathbb N) = 1+1 \end{align}
Pre-orders induced by subcategories
If you like preorders and monomorphisms, you'll like subobjects. For any object $c$ of a category $C$, we can consider the category of all monomorphisms $b \to c$ into $c$, and this forms a preorder in which all of the morphisms themselves come from monomorphisms in $C$. When $c$ is an object in a familiar category this usually ends up being equivalent to the poset of subobjects of $c$ in the usual sense (e.g. when $c$ is a set or a group).
If $\tau_n,\tau$ are stopping times with $\tau=\inf_n\tau_n$, then $\text E[X\mid\mathcal F_{\tau_n}]\to\text E[X\mid\mathcal F_\tau]$
Suggested modification to "My idea is to take $Y=\operatorname E\left[X\mid\mathcal F_\tau\right]$": why don't you consider $Y=X$ in Lemma 1? Rest of proof: Take $\mathcal G_n:=\mathcal F_{\tau_n}$ for $n\in\mathbb N$ in Lemma 1. Then $(M_{\tau_n})$ is an $(\mathcal F_{\tau_n})$-martingale and $M_{\tau_n}=\operatorname E\left[X\mid\mathcal F_{\tau_n}\right]$ for all $n\in\mathbb N$. Now let us calculate $\mathcal G_{\sup I}$, given (3): \begin{align*} \mathcal G_{\sup I}&=\sigma(\mathcal G_t:t\in I). \end{align*} Since in our case $I$ is actually $\mathbb{N}$, $\mathcal G_{\sup I}=\sigma(\mathcal G_n:n\in \mathbb N) = \sigma(\mathcal F_{\tau_n}:n\in \mathbb N)$. But from (3) we have that $\mathcal F_\tau=\bigcap_{n\in\mathbb N}\mathcal F_{\tau_n}$, so $\mathcal G_{\sup I}=\mathcal F_\tau$. Now from the second part of Lemma 1, we get that \begin{align*} M_t &\xrightarrow{t\to\sup I} M_{\sup I}&&\text{almost surely,}\\ \operatorname E\left[Y\mid\mathcal G_t\right] &\xrightarrow{t\to\sup I}\operatorname E\left[Y\mid\mathcal G_{\sup I}\right]&&\text{almost surely.} \end{align*} We now substitute in the appropriate values: \begin{align*} \operatorname E\left[X\mid\mathcal F_{\tau_n}\right] &\xrightarrow{n\to\infty} \operatorname E\left[X\mid \mathcal F_\tau \right]&&\text{almost surely,} \end{align*} which is the desired result.
Derivative of Rayleigh quotient
Since $t$ and $w$ are fixed, we can put $f(t):=R(z+tw)$ and, as @Yemon Choi suggests, after expanding the inner products (using that $A$ is self-adjoint): $$f(t)=\frac{(Aw,w)t^2+2t\Re (Aw,z)+(Az,z)}{||w||^2t^2+2t\Re (z,w)+||z||^2}.$$ We have, using the classical rules of differentiation: $$f'(t)=\frac{2t(Aw,w)+2\Re (Aw,z)}{||w||^2t^2+2t\Re (z,w)+||z||^2}-\frac{(Aw,w)t^2+2t\Re (Aw,z)+(Az,z)}{(||w||^2t^2+2t\Re (z,w)+||z||^2)^2}\,\big(2t||w||^2+2\Re (z,w)\big),$$ so evaluating it at $t=0$ we get $$f'(0)=\frac{2\Re (Aw,z)}{||z||^2}-2\,\frac{(Az,z)}{(||z||^2)^2}\,\Re (z,w),$$ which vanishes when $z$ is an eigenvector of $A$ (if $Az=\lambda z$ then $\Re(Aw,z)=\lambda\Re(z,w)$ and $(Az,z)=\lambda||z||^2$), which is what is wanted.
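A numerical check (the random symmetric matrix is made up for the test) that the directional derivative of the Rayleigh quotient vanishes at an eigenvector:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
A = B + B.T  # symmetric, hence self-adjoint

eigvals, V = np.linalg.eigh(A)
z = V[:, 0]             # an eigenvector of A
w = rng.normal(size=4)  # arbitrary direction

def R(v):
    return (v @ A @ v) / (v @ v)

h = 1e-6
print((R(z + h * w) - R(z - h * w)) / (2 * h))  # central difference, ~ 0
```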
Solve a (not difficult) differential equation
Hint: A differential equation on the form $$y'(x) + p(x)y(x) = q(x) $$ has the solution given by $$y(x) = e^{-P(x)}\int e^{P(x)}q(x)\,\mathrm{d}x,$$ where $P(x)$ is any antiderivative of $p(x)$, e.g. with integration constant $0$. In English this is called the integrating factor method.
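For instance (a made-up example equation, checked with SymPy): solving $y' + 2xy = x$, the integrating factor is $e^{x^2}$ and the method gives $y = \tfrac12 + Ce^{-x^2}$.

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

# y' + 2x y = x; the integrating factor is e^{x^2}
ode = sp.Eq(y(x).diff(x) + 2 * x * y(x), x)
print(sp.dsolve(ode, y(x)))  # expect y(x) = C1*exp(-x**2) + 1/2
```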
Are finite faithful $G$-sets asymptotically free?
Yes, finite faithful $G$-sets are asymptotically free. To see this, let $g$ be the order of $G$, say that a point of a $G$-set is $G$-free (or just free if $G$ is obvious from the context) if its stabilizer in $G$ is trivial, and consider the condition: $(1)$ The proportion of free points in $P^k(X)$ tends to $1$ as $k$ tends to $\infty$. We claim that $(1)$ is equivalent to the condition in the question, which is $(2)$ The average number of points per orbit in $P^k(X)$ tends to $g$ as $k$ tends to $\infty$. To translate $(1)$ and $(2)$ into precise mathematical statements, let $n(k)$ be the number of points of $P^k(X)$, let $f(k)$ be the number of free orbits (so that the number of free points is $gf(k)$), and let $r(k)$ be the number of orbits. Then $(1)$ says that $gf(k)/n(k)$ tends to $1$ as $k$ tends to $\infty$, and $(2)$ says that $n(k)/r(k)$ tends to $g$ as $k$ tends to $\infty$. We'll prove $(3)$ Conditions $(1)$ and $(2)$ are equivalent. $(4)$ Condition $(2)$ holds when $g$ is a prime number $p$. Let's prove that all finite faithful $G$-sets satisfy $(1)$ and $(2)$ taking $(3)$ and $(4)$ for granted. (The references to $(3)$ will be implicit.) Proof. We can assume $G\ne\{1\}$. Let $\mathcal H'$ be the set of all subgroups $H$ of $G$ such that $H\ne\{1\}$, and let $\mathcal H$ be the set of all minimal elements of $\mathcal H'$. In particular $\mathcal H$ is nonempty and any $H\in\mathcal H$ has prime order. Let $\xi\in P^k(X)$. Then $\xi$ is $G$-free if and only if it is $H$-free for all $H\in\mathcal H$. For each subgroup $K$ of $G$ let $F(K,k)$ be the set of $K$-free points in $P^k(X)$. Then $F(G,k)$ is the intersection of the finitely many sets $F(H,k)$ with $H\in\mathcal H$. As the proportion of points of $P^k(X)$ which are in $F(H,k)$ with $H\in\mathcal H$ tends to $1$ by $(4)$, this property also holds for $F(G,k)$, that is, $(1)$ holds. $\square$ We're left with proving $(3)$ and $(4)$. Proof of $(3)$. (Recall that $(3)$ says that $(1)$ and $(2)$ are equivalent.) We have, in the notation introduced just before the statement of $(3)$, $$ f(k)+\frac{n(k)-gf(k)}{g/2}\le r(k)\le f(k)+ n(k)-gf(k), $$ that is $$ \frac2g-\frac{f(k)}{n(k)}\le\frac{r(k)}{n(k)}\le1-(g-1)\ \frac{f(k)}{n(k)}\ . $$ Thus, if $f(k)/n(k)$ tends to $1/g$, so does $r(k)/n(k)$. Assume conversely that $r(k)/n(k)$ tends to $1/g$, and let $\varepsilon$ be positive. For $k$ large enough we have $$ \frac2g-\frac{f(k)}{n(k)}<\frac1g+\varepsilon, $$ that is $$ 0\le\frac1g-\frac{f(k)}{n(k)}<\varepsilon. $$ This shows that $f(k)/n(k)$ tends to $1/g$. $\square$ Proof of $(4)$. (Recall that $(4)$ says that $(2)$ holds when $g$ is a prime number $p$.) Writing $m(k)$ for the number of fixed points of $G$ in $P^k(X)$, we get $$ n(k)=pf(k)+m(k),\quad r(k)=f(k)+m(k), $$ and we want to show that $r(k)/n(k)$ tends to $1/p$. As we have $$ \frac{pr(k)}{n(k)}=1+(p-1)\ \frac{m(k)}{n(k)}\ , $$ it suffices to verify that $m(k)/n(k)$ tends to $0$. Note that $$ \frac{m(k+1)}{n(k+1)}=\frac{2^{r(k)}}{2^{n(k)}}=2^{r(k)-n(k)}, $$ and we only need to verify that $$ \frac{p}{p-1}\ \Big(n(k+1)-r(k+1)\Big)=n(k+1)-m(k+1) $$ $$ =2^{n(k)}-2^{r(k)}=2^{r(k)}\left(2^{n(k)-r(k)}-1\right) $$ tends to $\infty$. But we have $n(k)-r(k)\ge1$ by faithfulness and the inequality $pr(k)\ge n(k)$ implies that $r(k)$ tends to $\infty$. $\square$
Showing $1-\Re \varphi(2t)\leq 4(1-\Re \varphi(t)) $
Note that $\Re \varphi(t) = \Bbb E[\cos tX]$. For all $u\in\Bbb R$, we have $$\begin{eqnarray} 1-\cos 2u =2\sin^2 u&=&2-2\cos^2 u\le 4-4\cos u \end{eqnarray}$$ since $2(\cos u-1)^2=2\cos^2 u +2-4\cos u\ge 0$ holds for all $u$. Now we have $$\begin{eqnarray} 1-\Re \varphi(2t)=\Bbb E[1-\cos 2tX]&\leq &\Bbb E[4-4\cos tX]=4-4\Re \varphi(t) \end{eqnarray}$$ which follows from $1-\cos 2tX\le 4-4\cos tX$.
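A quick numerical confirmation of the pointwise inequality $1-\cos 2u\le 4(1-\cos u)$ driving the bound (the sample grid is arbitrary):

```python
import numpy as np

u = np.linspace(-10.0, 10.0, 100001)
assert np.all(1 - np.cos(2 * u) <= 4 * (1 - np.cos(u)) + 1e-12)
```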
What can we say about the rate of growth of a function growing faster than all polynomials?
If $g$ is any function that grows faster than all polynomials, then $\sqrt{g}$ is another such function, which grows slower than $g$. So nothing like your statement could ever be true, even if you replaced $e^t$ with some other function. In general, growth rates of functions are extremely dense — given any two categories of growth rate where one is strictly larger than the other, you can just about always concoct a function whose growth is intermediate between them (except when your inability to do so is tautological; e.g., your categories are "polynomial functions" and "functions that grow faster than polynomials").
Obtaining $\sum_{n=1}^{\infty} a^n \cos{(n\theta)} = \frac{a \cos{\theta}-a^2}{1-2a\cos{\theta}+a^2}$
Consider that: $$\cos(n\theta)=\frac{e^{n\theta i}+e^{-n\theta i}}{2}$$ So: $$r^n\cos(n\theta)=\frac{r^ne^{n\theta i}+r^ne^{-n\theta i}}{2}$$ When $|r|\lt1$, both of the following series are convergent: $$\sum_{n=1}^\infty r^ne^{n\theta i}\,\,\,,\,\,\,\sum_{n=1}^\infty r^ne^{-n\theta i}$$ Thus: $$ \begin{align} \sum_{n=1}^\infty r^n\cos(n\theta)&=\frac12\left(\sum_{n=1}^\infty r^ne^{n\theta i}+\sum_{n=1}^\infty r^ne^{-n\theta i}\right)\\ &=\frac12\cdot\frac{re^{i\theta}}{1-re^{i\theta}}+\frac12\cdot\frac{re^{-i\theta}}{1-re^{-i\theta}}\\ &=\frac12\cdot\frac{re^{i\theta}(1-re^{-i\theta})+re^{-i\theta}(1-re^{i\theta})}{(1-re^{i\theta})(1-re^{-i\theta})}\\ &=\frac12\cdot\frac{2r\cos\theta-2r^2}{1-2r\cos\theta+r^2}\\ &=\frac{r\cos\theta-r^2}{1-2r\cos\theta+r^2} \end{align} $$
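A numerical spot check of the closed form (values of $r$ and $\theta$ are arbitrary, with the series truncated where the tail is negligible):

```python
import numpy as np

r, theta = 0.7, 1.3
n = np.arange(1, 2000)
lhs = np.sum(r**n * np.cos(n * theta))
rhs = (r * np.cos(theta) - r**2) / (1 - 2 * r * np.cos(theta) + r**2)
assert abs(lhs - rhs) < 1e-12
```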