title | upvoted_answer |
---|---|
Example of a finite solvable group with non-solvable automorphism group | Take a finite abelian group $G=Z^n_p$, $p$ is prime. Then $Aut(G)$ equals $GL(n, Z_p)$ which is not solvable with few exceptions. Here $Z_p$ is the cyclic group of order $p$. |
Proof by induction that $a_0 = 1, a_1 = 1, a_n=2a_{n-1} + 3a_{n-2}$ satisfies $a_n = \frac12 (3^n) + \frac12 (-1)^n$ | We have, by the induction hypothesis, that:
$a_k=\frac 12(3^k)+\frac 12(-1)^k$
and :
$a_{k-1}=\frac 12(3^{k-1})+\frac 12(-1)^{k-1}$.
If we "plug them" into :
$a_{k+1}=2a_k+3a_{k-1}$
we get :
$$a_{k+1}=2\left[\frac 12(3^k)+ \frac 12(-1)^k\right]+3\left[\frac 12(3^{k-1})+ \frac 12(-1)^{k-1}\right] =$$
$$= 3^k+(-1)^k+3 \frac 12 3^{k-1}+ \frac 32(-1)^{k-1} =$$
$$=3^k+ \frac 12 3^k+(-1)(-1)^{k-1}+ \frac 32(-1)^{k-1}=\frac 32 3^k+ \frac 12 (-1)^{k-1}=$$
$$=\frac 12 3^{k+1}+ \frac 12 (-1)^{k-1}=\frac 12 3^{k+1}+ \frac 12 (-1)^{k-1}(-1)^2=$$
$$=\frac 12 3^{k+1}+ \frac 12 (-1)^{k+1}$$ |
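A quick numerical cross-check of the closed form above (a small Python sketch, independent of the induction argument):

```python
# Check that a_n = (3**n + (-1)**n) / 2 satisfies a_0 = a_1 = 1 and a_n = 2*a_{n-1} + 3*a_{n-2}.
def closed_form(n):
    return (3**n + (-1)**n) // 2   # always an integer: 3**n and (-1)**n are both odd

a = [1, 1]
for n in range(2, 20):
    a.append(2 * a[-1] + 3 * a[-2])

assert all(a[n] == closed_form(n) for n in range(20))
print("closed form matches the recurrence for n = 0..19")
```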
How do you find the probability of a certain state in Markov Chain? | One can forget Server 2. The length of the queue at Server 1 increases by $1$ at rate $a$ and, when positive, it decreases by one at rate $u_1$. This is the most classical birth-and-death chain, whose stationary distribution exists if and only if $a\lt u_1$ and can be readily found, either reading your textbook or solving the associated system of linear equations. |
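A numerical sketch of the birth-and-death chain described above, assuming it is the standard M/M/1-type queue with arrival rate $a$ and service rate $u_1$ (the rates and the truncation level below are made-up illustration values):

```python
import numpy as np

a, u1 = 1.0, 2.0        # illustration values with a < u1
rho = a / u1            # detailed balance gives pi_k = (1 - rho) * rho**k

# Cross-check by solving pi Q = 0 on a truncated state space.
N = 200
Q = np.zeros((N, N))
for k in range(N - 1):
    Q[k, k + 1] = a     # queue length increases at rate a
    Q[k + 1, k] = u1    # queue length decreases at rate u1 when positive
np.fill_diagonal(Q, -Q.sum(axis=1))

A = np.vstack([Q.T, np.ones(N)])          # pi Q = 0 together with sum(pi) = 1
b = np.zeros(N + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

print(np.max(np.abs(pi - (1 - rho) * rho**np.arange(N))))   # essentially zero
```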
An statement about non-negative trace and positive semi-definite operators. | The "if" direction does hold:
Let $\rho,\sigma$ be positive-semidefinite (matrices, linear operators). Denote by $\rho^{1/2}$
the positive-semidefinite square root of $\rho$ ("the", as it is uniquely determined). Then
$$\operatorname{Tr}(\rho\sigma) \;=\; \operatorname{Tr}(\rho^{1/2}\sigma\rho^{1/2})\;\geqslant\; 0$$
by cyclicity of the trace, and because $\,\rho^{1/2}\sigma\rho^{1/2}\,$ is positive-semidefinite.
Regarding the other direction, the "only if":
$$\text{With}\quad\sigma = \begin{pmatrix}1&0\\0&1\end{pmatrix},\quad
\rho =\begin{pmatrix}2&0\\0&-1\end{pmatrix}
\text{ you get } \operatorname{Tr}(\rho\sigma)=1\,.$$ |
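A numerical illustration of both directions (a sketch in Python; the random matrices are only there to test the "if" direction):

```python
import numpy as np

rng = np.random.default_rng(0)

# "If" direction: Tr(rho @ sigma) >= 0 whenever both matrices are positive semidefinite.
for _ in range(1000):
    X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    Y = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    rho, sigma = X @ X.conj().T, Y @ Y.conj().T
    assert np.trace(rho @ sigma).real >= -1e-9

# Counterexample to the "only if" direction, as in the answer.
sigma = np.eye(2)
rho = np.diag([2.0, -1.0])       # not positive semidefinite
print(np.trace(rho @ sigma))     # 1.0, nonnegative even though rho is not PSD
```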
Transformations and coordinate Systems | For Part A, you need to find $(x,y)=(x(u,v), y(u,v))$. You might start by writing $u^{2}+v^{2}$ as a function of $x$ and $y$. That should make it easy for you to find $x(u,v)$.
For Part B, you have correctly derived the expression describing the transformation of lines of constant $x$. Think about polar coordinates, and you should see what shape that expression describes in the $(u,v)$-plane. Lines of constant $y$ transform to similarly simple curves in the $(u,v)$-plane.
EDIT TO ADD:
Okay, so you've pretty much gotten through it, I think, with the help of some comments, but here's the remainder of Part A for completeness:
$$u^{2}+v^{2} = e^{2x}\cos^{2}y + e^{2x}\sin^{2}y = e^{2x},$$
and, therefore,
$$x = {1\over2}\log(u^2+v^2).$$
And you correctly derived that
$$y=\tan^{-1}{v\over u}.$$
Now note that this is, indeed, only a local inversion, because the arctangent is not uniquely defined (you have to specify the particular range of values, for example, $-\pi/2\lt y \lt\pi/2$).
Do you now have a better idea of the curves on the $(u,v)$-plane described by lines of constant $x$ and $y$? |
Find the integers that double if the first and last digits are swapped | I'll give an algebraic proof. Your answer is good, though.
Given a number $l$ greater than $10$ (the doubling property obviously fails for single digit numbers), write it as $l = 10^nx + 10y + z$, where $x$ and $z$ are single digit numbers, $x \neq 0$, $0 \le 10y + z < 10^n$ and $10^n \leq l < 10^{n+1}$. Essentially, we are isolating the first and last digits separately (for example, $166465 = 1 \times 10^5 + 6646 \times 10 + 5$).
Now, from the given condition, $2l = 10y + 10^n z + x$, since we have only swapped the first and last digits. Of course, by ordinary multiplication, this is also equal to $2 \times 10^n x + 20y + 2z$, by multiplying the expression for $l$ by $2$.
Consequently, $10y + 10^n z + x = 2\times 10^n x + 20y + 2z$. A few transpositions give $10y = (10^n - 2) z - (2 \times 10^n-1)x$.
The last digit of the left hand side is $0$. Therefore, so is the last digit of the number on the right hand side. Note that $n \geq 1$, since we know $l$ has two digits. This gives the last digit of the right hand side as the last digit of $x - 2z$.
Consequently, $x - 2z$ must have last digit zero. That is, either $x = 2z$, or $x = 2z - 10$ (note that $-20 < x - 2z < 10$).
But $y \geq 0$, so $z \geq \frac{(2 \times 10^n - 1) x}{10^n - 2} \geq 2x \geq x$. So $x = 2z - 10$ is the only possibility : the other forces $x \geq z$.
But if $x = 2z - 10$ then $10 y = 10^n z - 2z - 2(10^n)(2z - 10) + 2z - 10 = - 3 \times 10 ^n z + 2 \times 10^{n+1} - 10$, and dividing by $10$ gives $y = 2 \times 10^n - 3 \times 10^{n-1}z - 1 = 10^{n-1}(20- 3z) - 1$.
This forces $z \leq 6$ from $y \geq 0$. Also, note that $2z - 10 \geq 0$ so $z \geq 5$.
Note that $z = 6$ is ruled out: it gives $x = 2z - 10 = 2$ and $y = 10^{n-1}(20 - 3\cdot 6) - 1 = 2\times 10^{n-1} - 1$, so $10y + z = 2\times 10^n - 4 \geq 10^n$, contradicting $10y + z < 10^n$ (the middle block would spill over into the leading digit).
But if $z = 5 $ then $x = 0$, but $x \neq 0$ by assumption that $10^n \leq l$.
Thus, no such $l$ exists. |
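For what it's worth, a brute-force check in Python of the conclusion above (searching up to $10^6$; the bound is arbitrary):

```python
def swap_first_last(n):
    s = str(n)
    return int(s[-1] + s[1:-1] + s[0]) if len(s) > 1 else n

hits = [n for n in range(10, 10**6) if swap_first_last(n) == 2 * n]
print(hits)   # [] -- no such integer in this range
```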
Find normal closure of $\mathbb Q\left( (1+i)\sqrt[4]{5} \right)/\mathbb Q$ | As you guessed, $\bar{a}$ is not in $\mathbb{Q}(a)$. Indeed, if $\bar{a}$ were in $\mathbb{Q}(a)$, then
$$
i = -\frac{\bar{a}}{a} \in \mathbb{Q}(a), \sqrt[4]{5} = \frac{a}{1+i}\in \mathbb{Q}(a)
$$
so $\mathbb{Q}(a)\supseteq \mathbb{Q}(i, \sqrt[4]{5}) = K$. However, $a^4 = (1+i)^4\cdot 5 = -20$, so $[\mathbb{Q}(a):\mathbb{Q}] \le 4$, while the field $K$ has degree 8 over $\mathbb{Q}$, so this is impossible. |
Rearrange this expression for lambda and substitute it back into the expression | You are not reading the expressions correctly. There is no $\mathbf{r} + \hat{\mathbf{s}}$ or $\mathbf{r} - \hat{\mathbf{s}}$. In the original expression, the $\hat{\mathbf{s}}$ is multiplied by $\lambda$, then the result is added to $\mathbf{r}$. So after the substitution, it becomes
$$ \mathbf{r}' = \mathbf{r} + \left(\hat{\mathbf{s}} (- \mathbf{r}.\hat{\mathbf{e}}_3/s_3) \right)$$
I inserted extra parentheses to emphasize where the confusion may be. The above is then equal to
$$ \mathbf{r}' = \mathbf{r} - \left(\hat{\mathbf{s}} (\mathbf{r}.\hat{\mathbf{e}}_3/s_3) \right)$$ |
Proof of $\det(AB)=\det(A) \det(B)$: confused about $(c\alpha_{i}+\alpha_{i})B$ | We have
$$\begin{align*}
D(\alpha_1,\ldots,c\alpha_i+\alpha'_i,\ldots,\alpha_n) &= \det(\alpha_1B,\ldots,(c\alpha_i+\alpha'_i)B,\ldots,\alpha_nB)\\
&= \det(\alpha_1B,\ldots,c(\alpha_iB) + \alpha'_iB,\ldots,\alpha_nB)\\
&= c\det(\alpha_1B,\ldots,\alpha_iB,\ldots,\alpha_nB) + \det(\alpha_1B,\ldots,\alpha'_iB,\ldots,\alpha_nB)\\
&= cD(\alpha_1,\ldots,\alpha_i,\ldots,\alpha_n) + D(\alpha_1,\ldots,\alpha'_i,\ldots,\alpha_n),
\end{align*}$$
which shows that $D$ is linear in the $i$th coordinate. Of course, the same computation works in each component, so $D$ is $n$-linear.
The first equality is by definition of $D$. The second is by the first observation (that $(c\alpha_i + \alpha'_i)B = c\alpha_iB + \alpha'_iB$); the third because $\det$ is $n$-linear, and the final equality by the definition of $D$. |
Convergence of filters in top. | If $X$ is a non-empty set then for every point $x \in X$ there is the fixed filter $\mathcal{F}_x=\{A \subseteq X\mid x \in A\}$. It's clear that this is even a maximal (by inclusion) filter, i.e. an ultrafilter.
If $(X,\tau)$ is the discrete space and $\mathcal{F}$ is a (proper) filter on $X$:
$$\forall x: (\mathcal{F} \to x \iff \mathcal{F}=\mathcal{F}_x)$$
Left to right: $\{x\}$ is a neighbourhood of $x$ in $(X,\tau)$ so if $\mathcal{F}$ converges to $x$, $\{x\}$ must be an element of $\mathcal{F}$. But then by the filter axioms all $A \in \mathcal{F}_x$, so all $A$ with $\{x\} \subseteq A$, are also in $\mathcal{F}$, hence $\mathcal{F}_x \subseteq \mathcal{F}$ and if $A \in \mathcal{F}$ it must intersect $\{x\}$ so that $x \in A$ and so $\mathcal{F} \subseteq \mathcal{F}_x$ too. Hence $\mathcal{F} = \mathcal{F}_x$.
Right to left is trivial, as $\mathcal{F}_x$ is exactly the neighbourhood filter of $x$ in $(X,\tau)$.
So precisely the fixed filters converge, and no others. |
Given two adjacency matrices, how can I find if they're isomorphic? | As isomorphims have to preserve degree, there are only 2 possible ones, the first given by
$$ \phi_1 \colon 1\mapsto 3, 2 \mapsto 4, 3 \mapsto 1, 4 \mapsto 2 $$
and
$$ \phi_2 \colon 1\mapsto 4, 2 \mapsto 3, 3 \mapsto 1, 4 \mapsto 2 $$
$\phi_1$ maps the first matrix to the following matrix (that means the matrix below has $a_{\phi_1^{-1}(i), \phi_1^{-1}(j)}$ as its $(i,j)$-th entry):
$$
\begin{pmatrix}
0&1&1&1\\
1&0&0&0\\
1&0&0&1\\
1&0&1&0
\end{pmatrix}
$$
and so $\phi_1$ is an isomorphism. |
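If you want to check such a candidate bijection mechanically, here is a small sketch in Python. The adjacency matrix below is a placeholder for illustration only (it is not one of the matrices from the question); the permutation is $\phi_1$ from the answer, written $0$-based.

```python
import numpy as np

def is_isomorphism(A, B, phi):
    """True iff B[phi[i], phi[j]] == A[i, j] for all i, j."""
    n = len(A)
    return all(B[phi[i], phi[j]] == A[i, j] for i in range(n) for j in range(n))

phi1 = [2, 3, 0, 1]                   # 1->3, 2->4, 3->1, 4->2 in 0-based indexing

A = np.array([[0, 0, 1, 0],           # placeholder adjacency matrix (a star K_{1,3})
              [0, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
inv = np.argsort(phi1)                # inverse permutation
B = A[np.ix_(inv, inv)]               # relabelled copy of A, so phi1 is an isomorphism by construction

print(is_isomorphism(A, B, phi1))     # True
```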
long division with no numbers | You can separate it into cases on $B$ (it is easy to see that $B\not=0,1,5,9$), and find the values in the following order : $B\rightarrow A\rightarrow X\rightarrow Z,D,K\rightarrow P\rightarrow U,M,G$.
For $B=2$, we have $A=0,1$. But if $A=0$, then
$$(BU)=(2UMG)-(PZK)\ge 2000-999=1001$$
So, $A=1$. Also, if $X\le 8$, then
$$(2UMG)=2\times (XZD)+(12U)\le 2\times 899+129=1927$$
So, $X=9$.
Here, let $[n]$ be the right-most two digits of $n$. Then, since we have
$$[2\times (ZD)]=[ZK]$$
we have $Z=0$. But $(2UMG)=2\times (90D)+(12U)\le 2\times 909+129=1947<2000$, a contradiction.
Similarly, you can do for $B=3,4,6,7,8$. In the following, I'll write the outlines.
For $B=3$, we have $A=2,X=9$. Now $Z=4,5$.
Case 1 : If $Z=4$, then $D=7,K=1,P=8,U=0,M=7\quad\Rightarrow\quad D=M$.
Case 2 : If $Z=5$, then $D=0,P=8,K=0\quad\Rightarrow\quad D=K$.
For $B=4$, we have $A=3,X=9$. Now $Z=0,6$.
Case 1 : If $Z=0$, then $D=2,P=6,K=8$. So, $$(4UMG)=4\times 902+(34U)\le 4\times 902+349=3957.$$
Case 2 : If $Z=6$, then $D=5,P=8,K=0,U=2,M=0\quad \Rightarrow\quad K=M$.
For $B=6$, we have $A=5,X=9$. Now $(Z,D,K)=(2k,1,6),(2k-1,9,4)\quad\Rightarrow\quad B=K\quad\text{or}\quad X=D$.
For $B=7$, we have $A=6, X=9$. Now $(Z,D,K)=(3,4,8),(8,3,1)$.
Case 1 : If $(Z,D,K)=(3,4,8)$, then $P=5,U=2,M=1,G=0$. This is sufficient.
Case 2 : If $(Z,D,K)=(8,3,1)$, then $P=8\quad\Rightarrow \quad Z=P$.
For $B=8$, we have $A=7,X=9, Z=1,D=4,K=2,P=3,M=9\quad\Rightarrow\quad M=X$.
Hence, the answer is
$$A=6,\quad B=7,\quad D=4,\quad G=0,\quad K=8,$$$$\quad M=1,\quad P=5,\quad U=2,\quad X=9,\quad Z=3$$ |
The Reals, Complex Numbers and Quaternions | These are the (finite dimensional) associative division algebras over $\mathbb R$. If you drop associativity (keeping alternativity), you can include the octonions, and if you drop the alternativity requirement as well, we have the sedenions.
On the solvability of the Affine Group | If $n>1$, this group is not solvable since it contains $SL(n,\mathbb{R})=\{ A\in GL(n,\mathbb{R}) : \det(A)=1\}$, which is semi-simple.
Eccentricity of an ellipse.with b > a | The formula for eccentricity is always (distance between foci)/(length of major axis). |
$f$ is differenciable at a point a iff $f(x)=f(a)+L(x-a)+s(x)(x-a)$, with $L=f'(a)$ and $s(x) \to 0$ when $x \to a$ | If $f$ is differentiable at $x=a$ with derivative $f'(a)$, let $s(a)=0$ and $s(x)=\dfrac{f(x)-f(a)}{x-a}-f'(a)$ for $x\ne a$. Then $s(x)(x-a)=f(x)-f(a)-f'(a)(x-a)=f(x)-f(a)-L(x-a)$, so $f(x)=f(a)+L(x-a)+s(x)(x-a)$, and we have $\lim_{x\rightarrow a}s(x)=\lim_{x\rightarrow a}\left(\dfrac{f(x)-f(a)}{x-a}-f'(a)\right)=f'(a)-f'(a)=0$. |
induction proof of recursive multiplication | $\def\mul{{\sf mul}}$We do induction on $n$. For $n = 0$, we have
$$ \mul(a,0) = 0 = a \cdot 0 $$
for any $a \in \mathbb N$.
Now let $n \ge 1$, suppose $\mul(b,k) = b \cdot k$ holds for any $b \in \mathbb N$ and any $k < n$. Let $a \in \mathbb N$ be arbitrary. Then if $n$ is even
$$ \mul(a,n) = \mul(2a, n/2) $$
Now as $n/2 < n$, we have by induction hypothesis
$$ \mul(a,n) = \mul(2a, n/2) = 2a \cdot \frac n2 =a \cdot n. $$
If $n$ is odd, we have again by recursion and induction hypothesis
$$ \mul(a,n) = \mul\left(2a,\frac{n-1}2\right) + a
= 2a \cdot \frac{n-1}2 + a = a(n-1) + a = a\cdot n. $$
This proves $\mul(a,n) = an$. |
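For completeness, the recursion just proved, written out and tested in Python (this is my reading of the presumed definition of $\sf mul$):

```python
def mul(a, n):
    if n == 0:
        return 0
    if n % 2 == 0:
        return mul(2 * a, n // 2)          # even case: mul(a, n) = mul(2a, n/2)
    return mul(2 * a, (n - 1) // 2) + a    # odd case:  mul(a, n) = mul(2a, (n-1)/2) + a

assert all(mul(a, n) == a * n for a in range(50) for n in range(50))
print("mul(a, n) == a * n on all tested inputs")
```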
Finding jordan basis for a matrix | By the Jordan theorem you know that the Jordan normal form is:
$$J=\begin{pmatrix}0&1&0&0\\0&0&0&0\\0&0&0&1\\0&0&0&0\end{pmatrix}$$
https://en.wikipedia.org/wiki/Jordan_normal_form
Indeed, note that since $A^2=0$, the number of Jordan blocks of size $2$ is
$$2\cdot \dim(\ker(A^2))-\dim(\ker(A^3))-\dim(\ker(A))=8-4-2=2$$
By the Jordan theorem we know that a matrix $P$ exists such that $$P^{-1}AP=J$$
let $$P=[v_1,v_2,v_3,v_4]$$
then $P$ has to satisfy the following system: $$AP=PJ$$ that is, in your case, $$Av_1=0$$ $$Av_2=v_1$$ $$Av_3=0$$ $$Av_4=v_3$$ You have already found $v_1$ and $v_3$. |
If $p^2=a^2\cos^2\theta +b^2\sin^2\theta$ ,then show that $p+\frac{d^2p}{d\theta ^2}=\frac{a^2b^2}{p^3}$ | You want to prove $p^2+pp''=\frac{(ab)^2}{p^2}$. Since $p^2=\frac{a^2+b^2}{2}+\frac{a^2-b^2}{2}\cos 2\theta$, $pp'=\frac{b^2-a^2}{2}\sin 2\theta$ and $pp''+p'^2=(b^2-a^2)\cos 2\theta$. Hence $$p^2+pp''=p^2-p'^2+(b^2-a^2)\cos 2\theta=p^2-\frac{(a^2-b^2)^2}{4}\frac{1-\cos^22\theta}{p^2}+(b^2-a^2)\cos 2\theta.$$The rest follows from writing $\cos 2\theta$ in terms of $p^2$. |
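A quick symbolic/numeric spot-check of the identity (a sketch using sympy; the values of $a$, $b$ and the sample angles are arbitrary):

```python
import sympy as sp

theta = sp.symbols('theta')
a_val, b_val = 3, 2
p = sp.sqrt(a_val**2 * sp.cos(theta)**2 + b_val**2 * sp.sin(theta)**2)
expr = p + sp.diff(p, theta, 2) - a_val**2 * b_val**2 / p**3

for t in (0.3, 1.1, 2.5):
    print(sp.N(expr.subs(theta, t)))   # all numerically zero
```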
How do I see that $\{c \in I : 0 \le c \le b, y(c)=1\}$ has a minimum? | Let's call this set $K$. By the intermediate value theorem there exists some $c \in [0,b]$ such that $y(c)=1$, so $K$ is non-empty. Moreover $K\subset [0,b]$, so it is bounded. Call $m= \inf K$.
By definition of $\inf$ there exists some sequence $\{ x_n \} \subset K$ converging to $m$. Since the sequence is inside $K$, you have $y(x_n)=1$ for all $n$.
Finally, since $y$ is a continuous function, you have $y(x_n) \to y(m)$, so that $y(m)=1$. This means that $m \in K$, hence it is a minimum. |
Use the triangle inequality to prove that $|x - 2z| < 11$ | Uh... just do the triangle inequality.
$|x-2z| \le |x-y| + |y-2z| \le 3+8 = 11$ |
Solving Electric and Magnetic Fields for Charged Particle Path | One interesting case: when a charged particle is initially at rest in crossed electric and magnetic fields, it moves in a cycloid.
See the video here.
See also a detailed discussion in Chapter 2 of Fundamentals of Plasma Physics by Bittencourt, J. A. here |
Compensated Compactness And Conservation laws | I used this method a long time ago, so I may make some mistakes here. As I recall, compensated compactness is a functional-analytic method that is mainly used for solving scalar and $2\times 2$ systems of conservation laws (more details are in the books given below).
These days I use the vanishing viscosity method a lot, so I would try to explain the role of compensated compactness in one of the problems I was working on: Let's say you have two systems of conservation laws:
The original system:
$$u_t + f(u)_x =0 \hspace{1cm} (1)$$
And the approximation system:
$$u_t^{\epsilon} + f(u^{\epsilon})_x =\epsilon u^{\epsilon}_{xx} \hspace{1cm} (2)$$
We would like to show that the solution of $(2)$ converges to the solution of $(1)$ i.e. $u^{\epsilon} \rightarrow u$ weakly, when $\epsilon \rightarrow 0$ (this would be the vanishing viscosity method). Establishing this convergence is not trivial.
This weak limit $u$ actually provides a distributional solution to the system $(1)$. The proof relies on a compensated compactness argument - based on the representation of the weak limit in terms of Young measures...
References in the literature that helped me a long time ago with the method of compensated compactness are:
D. Serre, Systems of conservation laws 2, 2000 - Chapter 9
C. Dafermos, Hyperbolic conservation laws in continuum physics, $3^{rd}$ edition, 2010 - Chapter 16
R.J. DiPerna, Convergence of approximate solutions to conservation laws, 1983 - Section 3
L.C. Evans, Weak convergence methods for nonlinear partial differential equations, 1990 - Section 5 |
A locally connected metric space of first category | Consider $X=\mathbb{Q}\times\mathbb{R}\cup\mathbb{R}\times\mathbb{Q}\subset\mathbb{R}^2$. Note that $X$ is locally connected: you can travel between any two points in an open ball of $X$ by moving along horizontal and vertical lines. But $X$ is the union of the countably many lines $\{q\}\times\mathbb{R}$ and $\mathbb{R}\times\{q\}$ for $q\in\mathbb{Q}$ which are closed and have empty interior, so $X$ is first category. |
I'm not getting something about this differential equation | You have done it correctly; note that
$$\frac{\mathrm {d}p}{\mathrm dt} = .06\underbrace{p_0\mathrm e^{.06t}}_{= p} = .06p,$$
as desired. |
Show that the vector family $(\sin \ell x)$ with $\ell \in \mathbb R$ is linearly independent | Hint.
Suppose you have a finite sequence of such functions such that:
$$\sum_{i=1}^n \beta_i \sin \ell_i x\equiv 0$$
Then consider the Taylor expansion at order $1$. What relation does it yield?
Differentiate and use again the Taylor expansion. And again, and again.
Do you recognize a Vandermonde matrix somewhere? When does its determinant equal zero? |
How many times does the graph of $x = t^2 - t - 6$, $y = 2t, -5 < t < 5$ cross the $y$-axis? | The graph crossing the $y$-axis for some $t_0$ means that $t_0$ verifies $x(t_0)=t_0^2-t_0-6=0$ (in other words, when the $x$ coordinate is $0$).
So we solve the quadratic equation:
$$t^2-t-6=0$$
$$t=\frac{1\pm\sqrt{25}}{2} \longrightarrow t=3, t=-2.$$
These two values of $t$ satisfy $-5<t<5$, so we conclude your graph crosses the $y$-axis at two different points.
We know these two points are $(0,6)$ and $(0,-4)$ because $y=2t$ by your graph's definition, so:
$$t=3 \longrightarrow y=6 \longrightarrow \text{point } (0,6)$$
$$t=-2 \longrightarrow y=-4 \longrightarrow \text{point } (0,-4)$$ |
Every choice of basis is equally natural? | The point being made is that $\mathbb K^m$ has many different subspaces of dimension $n$ and in general there is no natural choice of a basis for such a subspace.
What would you say is a natural basis for the plane given by $x+y+z=0$ in $\mathbb R^3$? |
continuous dependency estimate for viscosity solutions | Actually this problem is proved in Evans and Lions' paper, so it is not trivial. |
formal proof challenge | Is the following a "formal proof"?
Let $a$, $b$, $c$ be boolean variables representing the truth values of $A$, $B$, $C$. Then by the second distributive law of Boolean algebra we have
$$(a\vee b)\wedge(a\vee c)=a\vee(b\wedge c)\ .$$
This shows that your "argument" not only proves the truth of the third line under the assumption of the first two, but that in fact the stuff above the \hline is logically equivalent to what's underneath. |
value of $\int_{-\infty}^{\infty}\arcsin\frac1{\cosh x}\,dx$ | Wolfy says it is $4$ times Catalan's constant.
One (not optimal) way to derive this is
$$\def\sech{\operatorname{sech}}
\begin{align*}
\int_{-\infty}^\infty\arcsin\sech x\,\mathrm{d}x&=2\int_0^\infty\arcsin\sech x\,\mathrm{d}x\\
&=2\int_0^1\frac{\arcsin u\,\mathrm{d}u}{u\sqrt{1-u^2}}\quad(u=\sech x)\\
&=2\int_0^{\pi/2}\frac{\theta\,\mathrm{d}\theta}{\sin\theta}\quad(u=\sin\theta)\\
&=2\int_0^1\frac{2\tan^{-1}t\,\frac{2\,\mathrm{d}t}{1+t^2}}{\frac{2t}{1+t^2}}\quad(t=\tan\tfrac12\theta)\\
&=4\int_0^1\frac{\tan^{-1}t}{t}\,\mathrm{d}t\\
&=4\int_0^1\sum_{n=0}^\infty\frac{(-1)^n}{2n+1}t^{2n}\,\mathrm{d}t\\
&=4\sum_{n=0}^\infty\frac{(-1)^n}{(2n+1)^2}=4G\\
\end{align*}
$$ |
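And a numerical confirmation (a sketch with scipy; $G \approx 0.9159655942$ is Catalan's constant, hard-coded below):

```python
import numpy as np
from scipy.integrate import quad

G = 0.915965594177219          # Catalan's constant
# integrate over (-50, 50): the tail beyond |x| = 50 contributes a negligible ~4e-22
val, _ = quad(lambda x: np.arcsin(1.0 / np.cosh(x)), -50, 50)
print(val, 4 * G)              # both approximately 3.66386
```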
Why is an open interval needed in this definition? (definition of a limit of a function) | Here are my opinions (not established facts accepted by the mathematical community):
Definitions usually should be short and provide good intuition for the definitions that come later.
Open intervals guarantee that points on both sides of $a$ exist. But if $J=[a,c]$ or $J=[d,a]$ then $\lim_{x\to a^S}f(x)$ will be one sided. So one would need some extra words explaining that $a$ shouldn't be at the start or end of $J$, which isn't short any more.
Open intervals also relate well to their generalization in topology (open sets), which is needed for the definition of continuous functions.
Block Pyramids in Minecraft - closed formula for total number of blocks | So we look for
$$ P_n := \sum_{k=1}^n (2k-1)^2 $$
for an $n$-layer pyramid. We have $(2k-1)^2 = 4k^2 - 4k + 1$, hence
\begin{align*}
P_n &= 4\sum_{k=1}^n k^2 - 4\sum_{k=1}^n k + n \\
&= 4\cdot \frac 16\cdot n(n+1)(2n+1) - 4 \cdot \frac 12 n \cdot (n+1) + n\\
&= \frac 23 \cdot (2n^3 + 3n^2 + n) - 2n^2 -2n + n\\
&= \frac 43 n^3 + \frac 23 n - n\\
&= \frac 13 n (4n^2-1)
\end{align*} |
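A quick check of the closed form against the layer-by-layer sum (Python):

```python
def blocks_direct(n):
    return sum((2 * k - 1)**2 for k in range(1, n + 1))

def blocks_closed(n):
    return n * (4 * n**2 - 1) // 3

assert all(blocks_direct(n) == blocks_closed(n) for n in range(1, 200))
print(blocks_closed(10))   # a 10-layer pyramid needs 1330 blocks
```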
About $−\vert x \vert\le x \le \vert x \vert$ in absolute values | If $x = 0$ then it's obvious that $-|x| \leq x \leq |x|$ because $$-|x| = x = |x| = 0.$$
If $x > 0$ then $x = |x|$ and
$$ -\underbrace{x}_{=|x|} < 0 < x =|x|.$$
Therefore $-|x| \leq x \leq |x|.$
If $x < 0$ then $x = -|x|$ and
$$ -|x| = x< 0 < -\underbrace{x}_{=-|x|} = |x|.$$
Therefore $-|x| \leq x \leq |x|.$
Why does the distance $x$ has the possibility to be greater than $x$ itself, shouldn't it only be that $x=|x|$ in any case? why greater than?
If $x = y$ then it is also true that $x \leq y$. The latter statement is less precise than the former, sure, but it is a true statement.
At least one of these inequalities
$$-|x| \leq x \leq |x|$$
is actually an equality. Which one of them ? Well it depends on the nature of $x$ (positive, negative, zero). Therefore it's better to put inequalities because it is true for all $x$.
I've seen only two different approaches, one that it seems to me too trivial [...] Is that reasoning even valid?
Yes, it is. |
Contour integral of $\frac{x^{p-1}}{1+x}$ | This is an integral that we can evaluate by working with its principal value. So let's evaluate
$$\text{P.V.}\int_0^{+\infty}\frac{x^p}{x(1+x)}\ \text{d}x$$
In which $$0 < p < 1$$
The function
$$f(z) = \frac{z^p}{z(1+z)}$$ has nonzero pole at $z = -1$ and the denominator has a zero of order at most $1$ (exactly one) at the origin. With residues we find:
$$\text{res}(f(z), z = -1) = \lim_{z\to -1} (1+z)f(z) = \frac{(-1)^p}{-1} = \frac{(e^{i\pi})^p}{-1}$$
Important Theorem, necessary for the continuation
Let $P(x)$ and $Q(x)$ be polynomials of degree $m$ and $n$ respectively, where $n\geq m+2$. If $Q(x)\neq 0$ for $x > 0$, and $Q(x)$ has a zero of order at most $1$ at the origin, and you have
$$f(z) = \frac{z^{\alpha}P(z)}{Q(z)}$$
where $0 < \alpha < 1$, then:
$$\text{P.V.}\int_0^{+\infty} \frac{x^{\alpha}P(x)}{Q(x)}\ \text{d}x = \frac{2\pi i}{1 - e^{2\pi i \alpha}}\sum_j\text{Res}(f(z), z_j)$$
Where $z_j$ are nonzero poles of $\frac{P(z)}{Q(z)}$.
Applying now the theorem we have:
$$\text{P.V.}\int_0^{+\infty}\frac{x^p}{x(1+x)}\ \text{d}x = \frac{2\pi i}{1 - e^{2\pi i p}}\sum_j\text{Res}(f(z), z_j)$$
Namely
$$\frac{2\pi i}{1 - e^{2\pi i p}}\text{Res}(f(z), z = -1) = \frac{2\pi i}{1 - e^{2\pi i p}}\cdot\frac{e^{i\pi p}}{-1} = \frac{2\pi i}{e^{ip\pi} - e^{-ip\pi}}$$
Which becomes after simple algebra of exponentials
$$\frac{\pi}{\frac{e^{ip\pi} - e^{-ip\pi}}{2i}} = \frac{\pi}{\sin(p\pi)}$$
The contour
(The original answer included a picture of the contour here: a keyhole contour around the branch cut on the positive real axis.) |
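A numerical sanity check of the final formula for one arbitrary exponent ($p = 0.3$):

```python
import numpy as np
from scipy.integrate import quad

p = 0.3
f = lambda x: x**(p - 1) / (1 + x)
val = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]   # split at 1 to handle the endpoint singularity
print(val, np.pi / np.sin(np.pi * p))            # both approximately 3.8832
```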
Why is epsilon not a rational number? | See Surreal number :
Consider the smallest positive number in $S_ω$:
$\varepsilon =\{S_{-}\cup S_{0}|S_{+}\}=\{0|1,{\tfrac {1}{2}},{\tfrac {1}{4}},{\tfrac {1}{8}},...\}=\{0|y\in S_{*}:y>0\}$.
This number is larger than zero but less than all positive dyadic fractions. It is therefore an infinitesimal number, often labeled $ε$.
Thus epsilon is, "by definition" less than (and so different from) all rational in the $(0,1)$ interval. |
Higher ramification groups | Some more-or-less random things that come to mind:
There is the formula for computing the different of a field extension in terms of the sizes of the higher ramification groups.
The higher ramification groups correspond to naturally arising groups of local units; namely, their image under the Artin map are precisely the higher powers of the local 1-units.
In fact, historically more basic than the previous point is that the first very careful proofs of Kronecker-Weber (i.e., before class field theory existed) by Hilbert heavily involved the use of the higher ramification groups.
They turn out to provide the correct fix to Euler factors of L-functions at "bad" places (where "bad" depends on your context.) This would require a rather long digression, so let me just mention the Hasse-Arf theorem and the Artin conductor. |
Find constants $A, B$ for cumulative density functions (probability) | For every distribution function $F$ it is true that
$\displaystyle\lim_{x\to +\infty}F(x)=1$ and
$\displaystyle\lim_{x\to -\infty}F(x)=0$
Now, in this case, since $X$ takes only positive values, the lower limit value of $0$ is already attained at $x=0$ (and possibly even before, but certainly for $x=0$). Now, substituting the given $F$ in the equations above yields $$1=\lim_{x\to \infty}F(x)=\lim_{x\to \infty}\frac{A+Bx}{9+8x}=\lim_{x\to \infty}\frac{\frac{A}{x}+B}{\frac{9}{x}+8}\frac{\not x}{\not x}=\frac{B}{8}$$ and $$0=F(0)=\frac{A+B\cdot0}{9+8\cdot 0}=\frac{A}{9}$$ Putting these together you obtain that $$\begin{cases}\frac{B}{8}=1\\\frac{A}{9}=0\end{cases} \implies \begin{cases}B=8\\[0.2cm]A=0\end{cases} \implies F(x)=\frac{8x}{9+8x}$$ for $x\ge 0$. Thus $$P(X>2)=1-P(X\le 2)=1-F(2)=1-\frac{8(2)}{9+8(2)}=1-\frac{16}{25}=\frac{9}{25}=0.3600$$ |
Need Help to Translate French Wiki Page on Dessin D'Enfants | Let $S$ be the sphere with points $P_1, P_2$, $P_3$ deleted. For a fixed basepoint $b\in S$, the topological fundamental group $\pi_1^{top} = \pi_1^{top}(S, b)$ is free on two elements. More precisely, the simple loops $x$ and $y$ around $P_1$ and $P_2$ are canonical generators of $\pi_1^{top}$, and the simple loop $z$ around $P_3$ has $xyz = 1$. Recall that there exists a bijection between the set of isomorphism classes of finite covers of $S$ and conjugacy classes of finite-index subgroups of $\pi_1^{top}$.
With a dessin d'enfant given, it's now possible to define a right action by $\pi_1^{top}$ on its edges: $x$ (resp. $y$) sends an edge $e$ to the first edge obtained by turning anticlockwise from the black (resp. white) vertex of $e$. This action, described in Figure 3, is called the monodromy action;
it's transitive because the underlying graph of the dessin d'enfant is assumed to be connected. In particular, the stabilizers of the edges are conjugate subgroups of $\pi_1^{top}$ of index equal to the degree of the dessin d'enfant. We thus associate to a dessin d'enfant a conjugacy class of subgroups of $\pi_1^{top}$. Conversely, given such a conjugacy class $C$, it's possible to associate a dessin d'enfant to it: its edges are by definition the elements of the set $\Gamma = \pi_1^{top}/H$ of right cosets of a representative $H$ of $C$. Right multiplication by an element of $\pi_1^{top}$ defines an action on $\Gamma$, and two edges have a black (resp. white) vertex in common iff they belong to the same orbit under the action of the subgroup of $\pi_1^{top}$ generated by $x$ (resp. $y$). It's easily verified that these two constructions are inverse to each other. Thus:
Proposition/Construction: There exists a bijection between dessins d'enfants and isomorphism classes of finite topological covers of the sphere with three points deleted.
The following paragraph gives a more visual and intuitive description of this correspondence. |
Interesting subspace of $M_n(\mathbb{C})$ [CMI 2019] | Let $A,B\in W$ be two linearly independent matrices; then $A^{-1}B$ is invertible (though not necessarily in $W$). Now $A^{-1}B$ has an eigenvector $v$ corresponding to an eigenvalue $\lambda\in \mathbb{C}$ (working over $\mathbb{C}$ guarantees that an eigenvalue exists). We have $A^{-1}Bv = \lambda v$. This implies $(B-\lambda A)v = 0$ and so $B-\lambda A$ is not invertible. |
If $A$ has eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, what are the eigenvalues of $(I - A)^{3}$? | If $Ax= \lambda x$ then
\begin{eqnarray*}
(I-A)x &=& (1-\lambda)x \\
(I-A)^3 x &=& (1-\lambda)^3 x. \\
\end{eqnarray*} |
Derivative product rule $\frac{d}{dx}\sqrt{xe^{-3x}}$ | If you want to compute $\frac{dy}{dx}$, then you have that
$$
\frac{1}{2 \sqrt{x e^{-3x}}} [e^{-3x}-3xe^{-3x}]=\frac{e^{-3x} (1-3x)}{2 \sqrt{x e^{-3x}}}
$$ |
Formula for the least element on the spectrum | Let $m=\inf\;\{ \lambda : \lambda\in\sigma(A) \}$. Then, for every positive integer $n$, $E_{A}[m,m+1/n] \ne 0$. So there exists a unit vector $x_n\in\mathcal{D}(A)$ such that $E_{A}[m,m+1/n]x_n = x_n$, which gives
\begin{align}
0 & \le \langle (A-mI)x_n,x_n\rangle \\
& = \int_{m}^{m+1/n}(\lambda-m) d\langle E(\lambda)x_n,x_n\rangle \\
& \le \frac{1}{n}\langle E[m,m+1/n]x_n,x_n\rangle \\
& \le \frac{1}{n}\langle x_n,x_n\rangle = \frac{1}{n}.
\end{align}
Therefore, $\lim_n \langle A x_n,x_n\rangle = m$. |
Prove that $(||x_n||)$ is bounded | The line
$$|\varphi _{x_n}(f)| = |f(x_n)| \leqslant \|f\|\cdot \|x_n\| \Rightarrow \|\varphi _{x_n}\| \leqslant \|x_n\| $$
passes without invoking boundedness of $\{f(x_n)\}$. The inequalities hold for all $f\in X'$, in particular for those with unit norm.
It would help to specify how your variables are quantified. I think you are using the above inequalities for fixed $n$ and then also apply Hahn-Banach for fixed $n$, i.e for every $x_n$ we pick some $f_n$ such that so and so. |
Real valued function, $M$, $s_n \rightarrow s; |f(s_n)| \le M$ for $s$ in open $U$. (Problem from Gamelin and Greene) | New approach!
$x$ is called good for $M$ if there's a sequence $\{x_i\}$ tending to $x$ with all its $f$-values $\le M$.
Clearly every point $x$ is good for some $M$, for example for $f(x)$.
It follows that if we define $G_M = \{x \in R: x \text{ good for } M\}$, then the countable union $G_1\cup G_2 \cup G_3 \cdots$ is all of $\mathbb{R}$.
If only some $G_M$ were open, that would have settled the problem. But life isn't that easy: what we can show instead is that each $G_M$ is closed. Let's prove this. Take $\{x_i\}$ a sequence of points in some $G_M$ tending to $x$. Use the good sequences tending to every $x_i$ and combine them to build a good sequence tending to $x$, therefore $x \in G_M$ (details omitted).
OK, so if $G_M$ is closed, then let's take its interior. If it's nonempty, that will be the open set we need.
But maybe all the closed sets $G_M$, for every $M$, have empty interiors. We will prove that can't be the case using the Baire category theorem.
If all $G_i$, $i=1,2,3,\dots$ have empty interiors, then their complements $U_i = R\setminus G_i$ are dense open sets. Since the union of $G_i$ is $\mathbb{R}$, the intersection of $U_i$'s is empty. But by the Baire category theorem this intersection is a dense (so certainly nonempty) set. Contradiction.
Therefore at least one $G_i$ will have a nonempty interior, and its interior will be the open set we need. |
Number of ways to write a number $n$ as a product of $3$ natural numbers | Given $n\in{\mathbb N}_{\geq1}$ the number $N_n$ of triples $(n_1,n_2,n_3)\in{\mathbb N}_{\geq1}^3$ satisfying $n_1\cdot n_2\cdot n_3=n$ is obtained as follows: Let
$$n=p_1^{m_1}\cdot p_2^{m_2}\cdot\ldots\cdot p_r^{m_r}$$
be the prime decomposition of $n$. Then
$$N_n=\prod_{k=1}^r{m_k+2\choose2}\ .$$
Unfortunately this is not what you want, since triples differing only by a permutation should be considered the same.
For the correct bookkeeping we first have to count separately the factorings of the form $n=n_1^2\, n_2$ and $n=n_1^3$. This will involve the divisibility of the $m_k$ by $2$ and $3$. Then each triple $(n_1,n_2,n_3)$ of our first counting will have to be weighted by the factor ${1\over6}$, if it contains three different factors, by the factor ${1\over3}$ if it has two equal entries, and by the factor $1$ if it has three equal entries. The latter can only happen if all $m_k$ are divisible by $3$. |
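A brute-force cross-check in Python of the ordered count $N_n=\prod_k\binom{m_k+2}{2}$ from the first part of the answer (the unordered count then follows from the weighting argument):

```python
from math import comb, prod
from sympy import factorint

def ordered_triples(n):
    return sum(1 for a in range(1, n + 1) if n % a == 0
                 for b in range(1, n + 1) if (n // a) % b == 0)   # c = n/(a*b) is then forced

def formula(n):
    return prod(comb(m + 2, 2) for m in factorint(n).values())

assert all(ordered_triples(n) == formula(n) for n in range(1, 300))
print(formula(360))   # 360 = 2^3 * 3^2 * 5 gives C(5,2)*C(4,2)*C(3,2) = 180 ordered triples
```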
Number of elements in group ring $R(G)$ in terms of $|R|$ and $|G|$ | Hint: as an $R$-module, $R(G)$ is isomorphic to $R^{\oplus |G|}$.
Edit: in simpler terms, $R(G)$ is isomorphic as an abelian group to $R\oplus R\oplus\cdots\oplus R$, where there are $|G|$ copies of $R$ in the direct sum. This in particular as a set is equal to $R\times R\times\cdots\times R$, the cartesian product of $|G|$ copies of $R$. |
Why is $a^{5} \equiv a\pmod 5$ for any positive integer? | OK, without using Fermat's Little Theorem (a far more general and elegant result), here's another easy workaround.
Any integer $a$ is congruent to exactly one of $0, 1, 2, -2, -1 \pmod 5$.
Take the fifth powers of each of those and see them reduce back to the original residue in each case. |
How to integrate gamma function | \begin{align}
& \int_0^1 x \frac{\Gamma(3\alpha) }{\Gamma(\alpha)\Gamma(2\alpha)} x^{(\alpha -1)} (1-x)^{2\alpha-1} \, dx \\
&= \frac{\Gamma(3\alpha) } {\Gamma(\alpha)\Gamma(2\alpha)} \int_0^1
x^{\alpha} (1-x)^{2\alpha-1} \, dx \\
&= \frac{\Gamma(3\alpha) } {\Gamma(\alpha)\Gamma(2\alpha)} \frac{\Gamma(\alpha+1) \Gamma(2\alpha) } {\Gamma(3\alpha+1)} \\
&= \frac{\alpha}{3\alpha} \\
&= \frac13
\end{align} |
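Equivalently, the computation above is the mean of a $\mathrm{Beta}(\alpha,2\alpha)$ random variable; a quick numerical check (the value $\alpha=1.7$ is arbitrary):

```python
from scipy.integrate import quad
from scipy.special import gamma
from scipy.stats import beta

alpha = 1.7
c = gamma(3 * alpha) / (gamma(alpha) * gamma(2 * alpha))
mean, _ = quad(lambda x: x * c * x**(alpha - 1) * (1 - x)**(2 * alpha - 1), 0, 1)
print(mean, beta(alpha, 2 * alpha).mean())   # both 1/3
```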
Find length of line segment using complex numbers/roots of unity approach | From Gerry Myerson's comment, we note that the inequality is
$$ \Re(( x-\mathrm i y)^3) + \Im ((x-\mathrm i y)^3) > 0$$
or equivalently
$$ \Re ((x+\mathrm i y)^3 ) > \Im((x+\mathrm i y)^3).$$
Writing $x+\mathrm i y = r e^{\mathrm i \theta}$, we need to know when
$$ r^3 \cos(3\theta) > r^3 \sin(3\theta).$$
Let me skip the high school trigonometry work, this gives you 3 regions of angles
$$ \theta \in \left(\frac{-\pi}4, \frac{\pi}{12}\right) + \frac{2k\pi}3, \quad k = 0,1,2.$$
Draw a graph:
Now it's clear we just need to find these 2 intersection points. I'll leave that part to you. |
Is there a "greatest function" that converges? | Yes, $f(n)=\frac{1}{n(\ln{n})^2}$.
No. Assume $f \geq 0$ and $\sum_{n \geq 1}{f(n)} < \infty$.
Then there exists an increasing sequence $N_n$ and some constant $C > 0$ such that $\sum_{N_n+1}^{N_{n+1}}{f(k)} \leq C2^{-n}$.
Then set $g(n)=(p+1)f(n)$, where $N_p < n \leq N_{p+1}$.
Then $\sum_{N_n+1}^{N_{n+1}}{g(k)} \leq C(n+1)2^{-n}$, thus $\sum_{n \geq 1}{g(n)}$ is finite and $g(n) \gg f(n)$. |
Why are these vectors expressed as row vectors and not column vectors? When to write as row vectors or column vectors? | You should indeed get the same answer if you express the row vectors $w_1,w_2,w_3,w_4$ as columns in a matrix, as long as you set the problem up in the correct manner.
The solution, as it is given in the text that you quote, expresses the row vectors $w$ as rows in a matrix $A$ and then asks you to solve for the unknown column vector $V$ in the equation $AV=0$. You get a solution with basis column vectors $v_1,v_2,v_3$ as written, and the entire solution could also be expressed parametrically as
$$x_1 = -3r - 4s - 2t$$
$$x_2 = 1r + 0s + 0t$$
$$x_3 = 0r - 2s + 0t$$
$$\vdots$$
or more briefly as $x=r\, v_1 + s\, v_2 + t \,v_3$.
Now let's do it using columns. First, convert the $w$'s into column vectors by taking their transposes: $w_1^T, w_2^T, w_3^T, w_4^T$. Next, collect those column vectors into a matrix, which is simply the transpose of the original matrix, namely $A^T$. Next, convert the column vector $V$ into a row vector by taking its transpose: $V^T = (x_1, x_2, x_3, x_4, x_5, x_6)$. Finally, write out the matrix equation that you must solve for the $x$'s:
$$V^T A^T = 0
$$
Note the different order of $V$ and $A$. This equation is equivalent to
$$A V = 0
$$
because $(AV)^T = V^T A^T$. So you will get the same parametric solutions for the $x$'s, and the null space will have the same basis vectors, except that they have been transposed into the row vectors $v_1^T, v_2^T, v_3^T$.
Edit Here are some additional remarks addressing the comment.
Row reduction of $A$ corresponds to multiplying $A$ on the left by a square elementary matrix $M$: the effect of the row reduction is to replace $A$ by $MA$. This does not alter the "null space" which is the solution set of the equation $AV=0$, because $AV=0$ if and only if $MAV=0$ (since $M$ is invertible).
Similarly, column reduction of $A^T$ corresponds to multiplying $A^T$ on the right by a square elementary matrix $N$, and this does not alter the solution set of the equation $V^T A^T=0$ because $V^T A^T = 0$ if and only if $V^T A^T N = 0$ (again since $N$ is invertible).
On the other hand, column reduction on $A$ may indeed alter the solution set of $AV=0$, because $AV=0$ and $ANV=0$ need not have the same solution sets. Similarly, row reduction on $A^T$ does indeed alter the solution set of $V^T A^T = 0$, because $V^T A^T = 0$ and $V^T M A^T = 0$ need not have the same solution set.
The common theme here is that matrix multiplication is not commutative; multiplying on the left and on the right have different effects; row operations and column operations have different effects. |
Roots of the equation $(1-4x)^4+32x^4=\frac{1}{27}$ | Out of curiosity, let us find the minimum of the LHS.
From the derivative, we solve
$$(1-4x)^3=8x^3$$ or $$1-4x=2x$$ so that there is a single local minimum at
$$x=\frac16$$ and $$f(x)=\frac1{81}+\frac2{81}=\frac1{27}\ !$$ |
A shorter way to prove the identity on vectors: $\vec{A}=(\vec{A}\cdot \vec{n})\vec{n}+(\vec{n}\times\vec{A})\times\vec{n}$ | The vector $(A\cdot n)\,n$ is essentially the projection of $A$ onto $n$, and subsequently it remains to be seen that $(n\times A)\times n$ provides us with the projection of $A$ onto the plane with unit normal $n$. Without loss of generality assume that $n$ is on the positive $z$ axis so the plane $n^\perp$ is the $xy$-plane (the idea is just easier to visualize this way). Note that $n\times A$ is located on $n^\perp$ and forms a right angle (in the counterclockwise direction!) to the projection $p$ of $A$ onto $n^\perp$, so by the right-hand rule $(n\times A)\times n$ will not only be another vector in $n^\perp$ but will form a right angle clockwise to $n\times A$, hence will point in the same direction as $p$. Now it only remains to be seen that the magnitude is correct.
Since $n\times A\perp n$, we have $\|(n\times A)\times n\|=\|n\times A\|\cdot\|n\|\sin\frac{\pi}{2}=\|A\|\sin\theta$, where $\theta$ is the angle $A$ makes with $n$. But $\|A\|\sin\theta$ is precisely the length of $A$'s projection $p$ onto the $xy$-plane! Q.E.D. |
Proving this sequence is bounded from above | Note that $a_n < \frac{1}{n} + \frac{1}{n} + \frac{1}{n} + \cdots + \frac{1}{n}$ ($n$ times). Then $a_n < \frac{n}{n} = 1$. |
What is the measure of the set of numbers in $[0,1]$ whose decimal expansions do not contain $5$? | See, study the complement of the set. That is, look at the numbers in $[0,1]$ which contain $5$ in their decimal expansion. Caveat : note that $0.6 = 0.5\overline{9}$ also counts as a decimal which is expressed with a $5$ as one of the digits, so belongs in the set. In particular, any terminating decimal which terminates with $6$ can be considered to belong to the set.
The first such category we can think of is: Those that have $5$ as the first digit following the decimal point. This is the set of numbers $[0.5,0.6]$. This has measure $0.1$, so the left over measure is $0.9$.
Now, from the remaining set, remove the set of all numbers with second digit $5$. This consists of $[0.05,0.06], [0.15,0.16], \ldots, [0.95,0.96]$ without $[0.55,0.56]$, since that was already removed earlier. Now, each of these has measure $0.01$, so we have removed $0.09$ more from the system. Hence, the left over is $0.81$.
By induction, prove the following : at the $n$th step, the set left over has measure $\frac{9^n}{10^n}$. Now, as $n \to \infty$, we see that the given set has measure zero (I leave you to rigorously show this, you can use the Borel-Cantelli lemma). This also incorporates the fact that the given set is measurable, since its measure is computable (and is $0$). |
Geometric sequence, where n is increased by 2 each time | As you can see (incl. proof) here, we have
$$\sum_{k=1}^{\infty} \frac{1}{k^2} = \frac{\pi^2}{6}$$
and
\begin{align*}
1 + \frac{1}{3^2} + \frac{1}{5^2}+ \ldots & = \sum_{k=1}^{\infty} \frac{1}{(2k-1)^2} = \sum_{k=1}^{\infty} \frac{1}{k^2} - \sum_{k=1}^{\infty} \frac{1}{(2k)^2} \\
& = \frac{\pi^2}{6} - \frac{1}{4} \sum_{k=1}^{\infty} \frac{1}{k^2} = \frac{\pi^2}{6} - \frac{\pi^2}{24} = \frac{\pi^2}{8}
\end{align*} |
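A quick numerical check of the two series used above (partial sums in Python):

```python
import math

N = 200_000
full = sum(1 / k**2 for k in range(1, N + 1))
odd  = sum(1 / (2 * k - 1)**2 for k in range(1, N + 1))
print(full, math.pi**2 / 6)   # close to pi^2/6 ~ 1.6449
print(odd,  math.pi**2 / 8)   # close to pi^2/8 ~ 1.2337
```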
Order of the smallest group containing all groups of order $n$ as subgroups. | This is not known. In fact we don't even know how many groups of order $n$ exist for most $n$. A related open question asks what is the smallest integer $s$ such that all groups of order $p^r$ embed in some group of order $p^s$.
However, I have reason to doubt that the number will be $m!$ for $m=p^e$. (This is not a proof, by the way, only a heuristic argument.) As in my link the growth rate of $p$-group isomorphism classes is $$p^{\left(\frac{2}{27}+O(n^{-1/3})\right)n^3}.$$ Multiply this by $p^n$ to count the number of elements in each subgroup (which assumes generously they will be embedded without overlap, including the identity) and we get $$p^{\left(\frac{2}{27}+O(n^{-1/3})\right)n^3+n}.$$ Comparing this to Stirling's approximation $$p^n!\sim \sqrt{2\pi p^n}\left(\frac{p^n}{e}\right)^{p^n}$$ you can see that the $p^n!$ will quickly become much too large to be the smallest group containing these isomorphism classes even if we were to insist that the groups were embedded completely separately.
In fact, we can use Stirling's approximation along with the upper bound in Gallagher (1967) to show that the number will almost never be $m!$ for large $m$. $$\lim_{m\to\infty}\frac{m^{cm^{2/3}\operatorname{log}(m)+1}}{\sqrt{2\pi m}\left(\frac{m}{e}\right)^m}=\lim_{m\to\infty}\frac{e^m m^{\frac{1}{2}-m+cm^{2/3} \operatorname{log}(m)}}{\sqrt{2 \pi }}= 0$$
So, I suspect that in general the number will be much lower than $m!$. |
Partial derivatives with transformation matrix | Since the partial derivatives are continuous at $(1,1)$, $f$ is differentiable there. Here's a proof of the fact that $f'(1,1)(x,y)=3x+y$ from the definition. Let $\varphi(x,y)=3x+y$. Note that\begin{align}f(x,y)-f(1,1)-\varphi\bigl((x,y)-(1,1)\bigr)&=2xy+\frac xy-3x-y+1\\&=\left(2-\frac1y\right)(x-1)(y-1)+\frac{(y-1)^2}y.\end{align}Now, take $\varepsilon>0$. If $\bigl\lVert(x,y)-(1,1)\bigr\rVert<\frac14$, then $\lvert y-1\rvert<\frac14$ and so $\lvert y\rvert>\frac34$, $\frac1{\lvert y\rvert}<\frac43$ and $\left\lvert2-\frac1y\right\rvert<2$. So\begin{align}\left\lvert f(x,y)-f(1,1)-\varphi\bigl((x,y)-(1,1)\bigr)\right\rvert&=\left\lvert\left(2-\frac1y\right)(x-1)(y-1)+\frac{(y-1)^2}y\right\rvert\\&\leqslant2\lvert x-1\rvert\lvert y-1\rvert+\frac43\lvert y-1\rvert^2.\end{align}So, take $\delta>0$ such that $\delta<\frac14$ and that $\delta<\frac\varepsilon4$. Then, if $\lVert(x,y)-(1,1)\rVert=r<\delta$, you have $\lvert x-1\rvert,\lvert y-1\rvert\leqslant r$ and\begin{align}\frac{\left\lvert f(x,y)-f(1,1)-\varphi\bigl((x,y)-(1,1)\bigr)\right\rvert}{\lVert(x,y)-(1,1)\rVert}&\leqslant2r+\frac43r\\&<4r\\&<4\delta\\&<\varepsilon.\end{align} |
Strange old multiplication table found in Oklahoma school | This must be for an in-class multiplication exercise. It has 22 numbers from 2-12 (excluding 10) arranged in semi-random order around the outer rim, and multipliers 2x-8x in a column in the middle.
Note that the numbers on the outer rim don't occur with equal frequency. The frequencies are:
2
3 3 3
4 4
5 5
6 6 6
7 7 7 7
8 8 8
9 9
11
12
I interpret the frequency as indicating importance of practicing the different multiplications. Multiplication by 1 and 10 are too easy to need practice; by 2 is next easiest; then 4,5,9; then 3,6,8; finally 7 is hardest. The largest numbers, 11 and 12, are de-emphasized as less important to learn by rote.
Probably the exercise worked by the teacher selecting a multiplier from the center column (e.g. "3x") and a starting point on the circle (e.g. 8 on top). She would then call on a student who would answer aloud "three times eight is twenty-four!" With a correct answer she would then call on another student to do the next problem: "three times seven is twenty-one!" This would repeat in rapid succession with the teacher picking students in no particular order. When the teacher was satisfied with the performance on "3x" she would pick another multiplier and continue.
The reason to have the numbers in unsorted order around a circle is to facilitate practicing the multiplications in unsorted order. A circle is used so the exercise can continue indefinitely without the teacher having to pause the flow. And the teacher's full attention can be on the students; she doesn't need to look back at the board at all. The students are called in random order so everyone needs to follow along, preparing to answer each question in case they are called upon. Then students see the numbers to multiply, do the problem mentally, and then hear the answer called out in formulaic singsong. This repeats.
The same exercise could work with all students calling out answers in unison. That version emphasizes even more the rote singsong memory, but would allow weaker students to slide through mumbling along with little fear of being called out on it. That might serve as a warmup or conclusion to the individuated version, to bring the class together. |
Why are these definitions of groups of central type equivalent? | Theorem $1$ of the following paper answers our question $1$. DeMeyer, F. R., Janusz, G.J. Finite groups with an irreducible representation of large degree. Math. Z. 108 (1969).
Answer to 2nd Question: Assume that $\mathbb{C}^fG$ is simple. Therefore, $G$ has an irreducible complex $f$-projective character $\chi$ such that ${\chi(1)}^2=|G|$. It is well known that $\chi$ determines an irreducible ordinary character $\chi^*$ of the same degree on $G^*$, where $G^*$ is a central extension of $G$ (namely a Schur cover of $G$). Now we have $G^*/C \cong G$ and ${\chi^*(1)}^2 = |G| = [G^* : C]$. Hence, $G^*$ is of central type (see definition 1). Restricting the character $\chi^*$ to $C$, we have ${\chi^*}_C = \chi^*(1) \lambda$, where $\lambda$ is a linear character of $C$. Now ${\lambda}^{G^*}(1) = [G^* : C]$. On the other hand, $\chi^*(1) \chi^*$ is a constituent of ${\lambda}^{G^*}$. Comparing the degrees we obtain that ${\lambda}^{G^*} =\chi^*(1) \chi^* $. Now the result follows from the Howlett-Isaacs theorem. |
Inequality involving ArcTan | It is known as the Shafer-Fink inequality.
Here you may find different proofs and some improvements, too. |
Does $\int^{\infty}_{0}\frac{\cos ^2\left(3x+1\right)}{\sqrt{x}+1} dx$ converge/diverge? | Perhaps this approach is the most explicit
$$
I\equiv\int^{\infty}_{0}\frac{\cos ^2\left(3x+1\right)}{\sqrt{x}+1} dx
$$
the integrand function is always positive, so if we show that the integral still diverges when restricted to a subset of $\mathbb{R}^+$ we are done. I'm taking as the subset the one that makes
$$
\cos{(3x+1)}\ge\frac{1}{\sqrt2}
$$
It's easy to show that the subset requested is this: $S={\cup}_{k=1}^{\infty}\;\left[A_k,B_k\right]$, with $A_k$ and $B_k$ given by:
$$
A_k=\frac{2 \pi k}{3}-\frac{1}{3}-\frac{\pi }{12}
$$
$$
B_k=\frac{2 \pi k}{3}-\frac{1}{3}+\frac{\pi }{12}
$$
since the integrand function is positive it holds: $$
S \subset \mathbb{R}^+ \implies \int_S\cdots\le\int_{\mathbb{R}^+}\cdots
$$
we have:
\begin{multline}
I\ge\sum_{k=1}^{\infty}\int_{A_k}^{B_k}\frac{\cos ^2\left(3x+1\right)}{\sqrt{x}+1} dx\ge{1\over2}\sum_{k=1}^{\infty}\int_{A_k}^{B_k}\frac{1}{\sqrt{x}+1} dx\ge{1\over2}\sum_{k=1}^{\infty}\int_{A_k}^{B_k}\frac{1}{\sqrt{B_k}+1} dx=\\
={1\over2}\sum_{k=1}^{\infty}\frac{B_k-A_k}{\sqrt{B_k}+1}={\pi\over 12}\sum_{k=1}^{\infty}\frac{1}{\sqrt{B_k}+1}
\end{multline}
and $$
\frac{1}{\sqrt{B_k}+1}\approx {\sqrt{3}\over \sqrt{2\pi k}}\implies\sum_{k=1}^{\infty}\frac{1}{\sqrt{B_k}+1}\to\infty
$$
by direct comparison:
$$
I \ge {\pi\over 12}\sum_{k=1}^{\infty}\frac{1}{\sqrt{B_k}+1}\to\infty \implies I\to\infty
$$
so the integral is divergent. |
My maths Question is on Probability | There are 6 possible outcomes for the first throw and 6 possible outcomes for the second throw, so you can model each event as a pair $(a, b) \in \{1, \dotsc, 6\}^2$.
Among the relevant events ("first number = second number") are the pairs $(2,2)$ and $(5,5)$. The pair $(6,5)$ is not.
What are all the relevant events?
Your probability is the number of relevant events divided by the number of all possible events. (The result is different from the one you gave) |
Shouldn't tan(x) be a continuous function | The domain of the tangent function is $\mathbb{R}\setminus\left(\frac\pi2+\pi\mathbb{Z}\right)$ and it is continuous. Since, if $k\in\mathbb Z$, the limit $\lim_{x\to\frac\pi2+k\pi}\tan x$ doesn't exist (in $\mathbb R$), you cannot extend it to a continuous function from $\mathbb R$ to $\mathbb R$. |
Reducing modulo 5 does not make the equations equivalent? | Here, $5^n \equiv 0 \mod 5$, except for $n = 0$.
Therefore, we have to make a distinction between $n = 0$ and $n \ne 0$.
Moreover, $\phi(5) = 4$.
Therefore,
$$ 2^n \equiv 2^{n \bmod 4} \pmod 5. $$
As $n = 0$ is a special case, we have to consider $n = 0, 1, 2, 3, 4$.
Concerning $p$: if $p$ is a solution, then $p+5$ is a solution too. Then, for $p$, we have only to consider its value modulo 5.
At this point, it is easy to consider the different cases.
If $n = 0$:
$$p^2 = 2 \mod 5$$
It is easy to check that $p^2$ is equal to $0, 1, 4$ modulo 5. So, no solution
If $n \ne 0$:
$$p^2 \equiv 2^n \mod 5$$
$$n = 1 \implies p^2=2 \mod 5 \implies \text{no solution in } p$$
$$n = 2 \implies p^2=4 \mod 5 \implies p=2 \text{ or } p=3$$
$$n = 3 \implies p^2=3 \mod 5 \implies \text{no solution in } p$$
$$n = 4 \implies p^2=1 \mod 5 \implies p=1 \text{ or } p=4$$ |
Why do we use a $z$-test rather than a $t$-test when estimating an appropriate sample size? | A general rule of thumb is:
If the sample size $n$ is large (for example, larger than $30$), then by the Central Limit Theorem, a $z$-test is appropriate. Otherwise, for small $n$ (for example, smaller than $30$), a $t$-test is appropriate.
Now suppose we want to go backwards. Instead of figuring out which type of test to use based on the sample size, we want to know how big the sample size should be based on how accurate we want our confidence interval to be (so that the standard error is within a certain threshold). Generally speaking, we want our standard error to be small. Hence, the sample size should be large; large enough so that the Central Limit Theorem applies so that the Student $t$ distribution is no longer necessary and we can use the Normal distribution. |
Proving that the empty glass is a deformation retract of the full glass | While @String gave you the explicit formula for $\theta$ I will show you how to generalize it to any "good enough" closed subset and a projection onto it.
So what is $\theta$?
More generally consider a Banach space $E$, a fixed vector $w\in E$ and a closed subset $A\subseteq E$ such that $w\not\in A$. Now for any $v\in E$ consider following sets
$$I_v=\big\{tv+(1-t)w\ \big|\ t\in\mathbb{R}\big\}$$
$$A_v=I_v\cap A$$
$$B=\{v\in E\ |\ A_v\neq\emptyset\}$$
and the multivalued function
$$f:B\multimap E$$
$$f(v)=A_v$$
I leave as an exercise that $f$ is lower semicontinuous.
Now you can easily check that $A_v$ is closed. Assume that $A_v$ is additionally convex for each $v\in B$. Since $B$ is paracompact (as a metric space) the Michael selection theorem applies to $f$ and thus there exists a continuous selection $s:B\to E$ (i.e. $s(v)\in f(v)$).
In our case $E=\mathbb{R}^3$, $w=(0,0,2)$ and $A=E^2\times\{0\}\cup S^1\times I$. You can easily check that $A_v$ has exactly one point if non-empty. Thus $A_v$ is convex and by the Michael selection theorem there is a continuous selection. Note that this selection is unique (simply because $A_v$ has one point). The restriction of that selection to $X$ is equal to $\theta$. |
Distance of a point to a line | The equation of a plane perpendicular to the line is
$$x+y+z=a.$$ If this plane passes through $(2,2,1)$ then $a=5$.
So the plane $x+y+z=5$ intersects the line when
$$3t+6=5$$
so $t=-\frac{1}{3}$ and now you just need the distance between $S$ and $(\frac{5}{3},\frac{5}{3}, \frac{5}{3})$. |
Cylinders and dyadic Intervals | I’ve seen at least two definitions of cylinder set.
1. For each finite, non-empty $F\subseteq \Bbb Z^+$ and function $\sigma:F\to\{0,1\}$ let $$B(\sigma)=\left\{x\in\Sigma_+:x\upharpoonright F=\sigma\right\}\;;$$ the cylinders are the sets $B(\sigma)$.
2. The cylinders are the sets $B(\sigma)$ defined in (1) for which $\operatorname{dom}\,\sigma$ is an initial segment of $\Bbb Z^+$.
Assuming that your definition is the second one, the result is straightforward.
Fix $n\in\Bbb Z^+$ and $m\in\{0,\dots,2^n-1\}$, and let $D=\left[\dfrac{m}{2^n},\dfrac{m+1}{2^n}\right]$. Let
$$m=\sum_{k=0}^{n-1}\frac{b_{n-k}}{2^k}\;,$$ where each $b_k\in\{0,1\}$, so that $b_1\dots b_n$ is the $n$-bit binary representation of $m$, and let
$$\sigma:\{1,\dots,n\}\to\{0,1\}:k\mapsto b_k\;.$$
I claim that $D=\varphi\big[B(\sigma)\big]$, i.e., that $\varphi(x)\in D$ if and only if $x_k=b_k$ for $k=1,\dots,n$.
If $x\in B(\sigma)$, then
$$\varphi(x)=\sum_{k\ge 1}\frac{x_k}{2^k}=\sum_{k=1}^n\frac{b_k}{2^k}+\sum_{k>n}\frac{x_k}{2^k}=\frac{m}{2^n}+\sum_{k>n}\frac{x_k}{2^k}\;,$$
and
$$0\le\sum_{k>n}\frac{x_k}{2^k}\le\frac1{2^n}\;,$$
so $\varphi(x)\in D$, and $\varphi\big[B(\sigma)\big]\subseteq D$. On the other hand, we know that
$$\left\{\sum_{k>n}\frac{x_k}{2^k}:x_k\in\{0,1\}\text{ for }k>n\right\}=\left\{\frac1{2^n}\varphi(x):x\in\Sigma_+\right\}=\left[0,\frac1{2^n}\right]\;,$$
so in fact $\varphi\big[B(\sigma)\big]=D$. |
Show that $f$ maps the entire unit disc onto itself. | If $w \in D$ and $w \not \in f(D)$ then
$$
z \mapsto \frac{1}{f(z)-w}
$$
is holomorphic on $\overline{D}$. By the maximum modulus principle, for any $z \in D$
$$
\left|\frac{1}{f(z)-w}\right| \leq \max_{\omega \in \partial D} \left| \frac{1}{f(\omega)-w} \right| \leq \max_{\omega \in \partial D}\frac{1}{\left||f(\omega)|-|w|\right|} = \frac{1}{1-|w|}
$$
so $|f(z) - w| \geq 1-|w|$. This means that $D \setminus f(D)$ is open. By the open mapping theorem either $f$ is constant or $f(D)$ is open. The latter implies that $f(D)=D$ since $D$ is connected.
Moreover, the image of a compact set under a continuous function is compact. Therefore if $f$ is not constant then $f(\overline{D}) = \overline{D}$.
Or, alternatively, this bound shows that $w$ can be moved in a straight line towards $0$ while the radius of the "image free" disc around it increases. For $w=0$ the bound becomes $|f(z)| \geq 1$ so that $f(D) \cap D = \emptyset$. So either $f(D)$ contains all of $D$ or avoids it entirely. In the latter case $f$ maps into the unit circle. This would mean that $\overline{f} = f^{-1}$ but $\overline{f}$ can be holomorphic (complex differentiable) only if $f'$ vanishes identically. The conclusion is that either $f(D)=D$ or $f$ is constant. |
Intersection of ranges of projections | Let $H$ be a Hilbert space with orthonormal basis $(e_n)$. Let $p\in B(H)$ be the projection onto the span of the vectors $e_{2n}$ and let $q\in B(H)$ be the projection onto the span of the vectors $e_{2n}+2^{-n}e_{2n+1}$.
Note now that $p-q$ is a compact operator (it can be written as the norm limit of the finite rank operators $p_n-q_n$ where $p_n$ and $q_n$ are the corresponding projections onto the spans of just the first $n$ of the basis vectors). So, $p$ and $q$ are equal and nonzero projections in the quotient $B(H)/K(H)$. Now take a faithful representation of $B(H)/K(H)$ on some Hilbert space $H'$, so $p$ and $q$ act by the same nonzero projection on $H'$. Then $B(H)$ has a natural faithful representation on $H\oplus H'$, and the ranges of $p$ and $q$ have nontrivial intersection in this representation. However, in the faithful representation on just $H$, the ranges of $p$ and $q$ have trivial intersection. |
Holomorphic function definition. Am I missing something very obvious? | The answer really is in LeBtz's comment, but I would like to add something. The continuity assumption on $\frac{df}{dz}$, written $f\in C^1$, is superfluous but it might be useful pedagogically: it allows for the use of the basic theorems on differential forms to develop the theory. (So it is useful as long as one already knows something about differential forms).
Namely, if one writes $f=u+iv$ and computes formally
\begin{equation}
\begin{split}
f\, dz&=(u+iv)(dx+idy) \\ &=(u\,dx-v\,dy)+i(u\,dy+v\,dx)\\ &\stackrel{\text{def}}{=}\omega+i\nu,
\end{split}
\end{equation}
then one can prove the following unsurprising result: if $\gamma$ is a regular enough curve in the plane, one has
$$
\int_{\gamma}f\, dz=\int_\gamma \omega + i\int_\gamma \nu.
$$
(If the instructor wants to save time, she might even use this formula to define the left hand side, provided the audience already knows what the integral of a differential form is). The differential forms $\omega$ and $\nu$ are closed (meaning that they are $C^1$ and their exterior derivative vanishes, i.e., the cross derivatives of their Cartesian components are equal) precisely when $f$ satisfies Cauchy-Riemann equations.
If one is working in the $C^1$ setting, necessary in the theory of differential forms, one obtains the integrability theorem for free: every holomorphic function in a simply connected domain has a primitive. In particular, and that's very important, one has
$$
\oint_\gamma f(z)\, dz =0
$$
for every closed curve $\gamma$ that does not enclose a singularity of $f$.
This last result is arguably the cornerstone of complex analysis. Note that we have obtained it quickly and easily by appealing to the language of differential forms. An instructor may choose to focus attention on this result and sweep the technical detail of the $C^1$ assumption under the rug. Later, one will discover that any holomorphic function is much smoother than $C^1$ anyway. |
$\exists x ~ \forall y ~ f(x, y) \iff \forall y() ~ \exists x~ f(x, y(x))$ Name? Proof? | Consider the dual statement:
$$
\forall x \exists y\, R(x,y) \iff \exists f \forall x\, R(x, f(x)).
$$
(I've used more typical variables in stating it; no good purpose is served by using "$y$" as a variable for two different sorts of things.) This is essentially the Axiom of Choice (AC), and is provable using AC. $f$ is called a uniformizing function for $R$. If further constraints are placed on $R$ and $f$ (such as, "$R, f$ are $\Pi^1_1$", or "$R, f$ are projective"), it may or may not hold in a theory (and may or may not be provable even in ZFC). See the wikipedia article on uniformization for more. |
Art Gallery Problem - is covering the edges enough? | No. On the picture below guards in green corners can see the whole perimeter but can not see the region with red points. |
Geometry involving a diagram and inequalities | There are 20 segments of fencing with fence posts 10 feet apart, placed horizontally or vertically: hence we've got $$20 \times 10 = 200 \text{ feet of fencing along the perimeter}.$$
Now, there is one extra piece of fence, of length $x$, which we need to add to complete the computation of the perimeter $P$: the diagonal piece, which can be thought of as the diagonal of a $10$ ft $\times$ $10$ ft square (running from its top left corner to its bottom right corner).
By the Pythagorean Theorem: $$x^2 = 10^2 + 10^2 \implies x = \sqrt{100 + 100} = \sqrt{2\times 100} = \sqrt 2 \times \sqrt {100} = 10\times\sqrt 2 \approx14.1421$$
Hence the perimeter is $$200 + x = 200 + 10 \sqrt2 \approx214.1421 > 210 \text{ feet}$$
And note that by the triangle inequality, the length of the diagonal, $10\sqrt 2 < 10 + 10 = 20$, which is less than the sum of two pieces of fencing of length 10.
So, putting that together, we have $$230 \gt 220 \gt 200+10\sqrt{2} = P \gt 210$$
That leaves us with only option (A) being correct: $P > 210$ feet. |
Convert section of an equation to a cubic Bezier-curve | The definition of a cubic Bezier curve requires 4 points. The starting and ending points and two additional reference points. In general, the curve will not pass through the reference points. So, to answer your question, there is not a unique way of "converting" the graph of $f$ into a Bezier curve... You need to specify two additional points $(x_1,f(x_1)), (x_2, f(x_2))$ for some $0 < x_1 < x_2 < 1$.
Considering $P_0 =(0,0)$, $P_1=\left(\frac 13, f(\frac 13)\right)$, $P_2=\left(\frac 12, f(\frac 23)\right)$ and $P_3=(1,1)$, the parametric equation of the Bezier curve is
$$
(1-t)^3 P_0+ 3 (1-t)^2 t P_1 + 3(1-t) t^2 P_2+t^3P_3, \quad t \in [0,1]
$$
or in this specific case,
$$
\left\{
\begin{array}{l}
x(t)=t^3+2 (1-t) t^2+\frac{1}{3} (1-t)^2 t\\
y(t)=t^3+\frac{6}{7} (1-t) t^2+\frac{1}{11} (1-t)^2 t
\end{array}
\right.
$$
This was obtained with the reference points over the curve. You can also run an optimization procedure to get the reference points that minimize the distance between the original curve and the Bezier cubic. In the following images you can see what you obtain with the reference points over the curve and placed elsewhere.
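If you want to experiment numerically, here is a minimal sketch (numpy; the function `f` below is a placeholder of my own, not the curve from the question) that evaluates a cubic Bezier curve from four control points:

```python
import numpy as np

def bezier3(P0, P1, P2, P3, t):
    """Evaluate the cubic Bezier curve with control points P0..P3 at the parameters t."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return ((1 - t) ** 3)[:, None] * P0 + 3 * ((1 - t) ** 2 * t)[:, None] * P1 \
         + 3 * ((1 - t) * t ** 2)[:, None] * P2 + (t ** 3)[:, None] * P3

# Placeholder target curve, with the interior reference points taken on it.
f = lambda x: x ** 2
P0, P3 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
P1, P2 = np.array([1 / 3, f(1 / 3)]), np.array([2 / 3, f(2 / 3)])
print(bezier3(P0, P1, P2, P3, np.linspace(0.0, 1.0, 5)))
```

From here one could, for instance, minimize the sum of squared distances between sampled Bezier points and the graph of $f$ over the coordinates of $P_1$ and $P_2$ to get the optimized fit mentioned above.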
Finding the component of a vector tangent to a circle | I think that the simplest way would be like this:
Firstly, find the angle between (the line from the center of the circle to the midpoint of the Vector) and (the Vector itself). (Do you know how to do this? If not, comment and we can expand). I'm capitalizing the V in the Vector we start with to distinguish it.
Once you have this angle $\theta$, then the length of the tangent you're looking for is given by $L\sin \theta$, where $L$ is the length of the Vector. And whether it's clockwise or counterclockwise should come from your calculation of the angle.
Or we could do everything with complex numbers. If the angle that the midpoint-line makes with the positive real axis is $\theta$, then multiply the complex representatives of the endpoints of the Vector by $e^{-i\theta}$, and calculate the imaginary part of the difference $d$ between the endpoints. Then positive and negative correspond to positive and negative imaginary part, respectively. And for that matter, the tangent vector is given by $e^{i \theta} \text{Im}(d)$, where Im stands for the imaginary part.
If you know some complex, then you'll see we rotated the vector so that our tangent point on the circle is always $(1,0)$, so we took the $y$ component and rotated it back. |
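Here is a small sketch of the complex-number recipe (plain Python `cmath`; the function name and the origin-centered circle are assumptions of mine):

```python
import cmath

def tangential_component(a, b, center=0j):
    """Signed tangential component of the vector from a to b with respect to the
    circle centered at `center`: positive = counterclockwise, negative = clockwise."""
    mid = (a + b) / 2 - center            # line from the center to the vector's midpoint
    theta = cmath.phase(mid)              # its angle with the positive real axis
    d = (b - a) * cmath.exp(-1j * theta)  # rotate so that direction becomes the real axis
    return d.imag                         # imaginary part = tangential part

# A vector near angle 0 pointing straight "up" is almost purely tangential (counterclockwise).
print(tangential_component(1 + 0j, 1 + 1j))
```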
Computing $\sum_{i=2}^n \sum_{j=2}^i {n \choose i}{i \choose i-j}{j \choose 2}$ | Using the subset-of-a-subset identity
$\binom ab\binom bc=\binom ac\binom {a-c}{b-c}$,
we have
$$\begin{align}
\binom ni \binom i{i-j} \binom j2
&=\binom ni\underbrace{\binom ij\binom j2}\\
&=\underbrace{\binom ni\binom i2}\binom {i-2}{j-2}\\
&=\binom n2\binom {n-2}{i-2}\binom {i-2}{j-2}
\end{align}$$
Hence
$$\begin{align}
\sum_{i=2}^n \sum_{j=2}^i \binom ni \binom i{i-j} \binom j2
&=\sum_{i=2}^n \sum_{j=2}^i \binom n2 \binom {n-2}{i-2} \binom {i-2}{j-2}\\
&=\binom n2 \sum_{i=2}^n \binom {n-2}{i-2} \sum_{j=2}^i \binom {i-2}{j-2}\\
&=\binom n2\sum_{i=2}^n \binom {n-2}{i-2} 2^{i-2}\\
&=\binom n2 (1+2)^{n-2}\\
&=\color{red}{\binom n2 3^{n-2}}
\end{align}$$
Alternatively,
$$\begin{align}
\binom ni\binom i{i-j}\binom j2
&=\binom n2\binom {n-2}{i-2}\binom {i-2}{j-2}\\
&=\binom n2\underbrace{\binom {n-2}{n-i}\binom {i-2}{i-j}\binom {j-2}{j-2}}\color{lightgrey}{1^{n-i}1^{i-j}1^{j-2}}\\
&=\binom n2 \binom {n-2}{n-i,i-j,j-2}\color{lightgrey}{1^{n-i}1^{i-j}1^{j-2}}\\
\end{align}$$
Using the multinomial summation identity
$$\sum_{p+q+r=m}\binom m{p,q,r}x^py^qz^r=(x+y+z)^m$$
we have
$$\begin{align}
\sum_{i=2}^n\sum_{j=2}^i\binom ni\binom i{i-j}\binom j2
&=\binom n2\sum_{(n-i)+(i-j)+(j-2)=n-2}\binom {n-2}{n-i,i-j,j-2}\color{lightgrey}{1^{n-i}1^{i-j}1^{j-2}}\\
&=\color{red}{\binom n2 3^{n-2}}\end{align}$$ |
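A quick brute-force check of the closed form (plain Python):

```python
from math import comb

def lhs(n):
    return sum(comb(n, i) * comb(i, i - j) * comb(j, 2)
               for i in range(2, n + 1) for j in range(2, i + 1))

def rhs(n):
    return comb(n, 2) * 3 ** (n - 2)

print(all(lhs(n) == rhs(n) for n in range(2, 12)))  # True
```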
Linear independent vectors and bases in a room | HINT
To determine a pair of linearly independent vectors among $u$, $v$ and $w$, consider the matrix with the vectors as rows and obtain its RREF.
Once we have two linearly independent vectors, a third one can be found from the orthogonality conditions given by the dot product, as illustrated in the sketch below.
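A concrete illustration of the hint (sympy; the vectors below are hypothetical placeholders, substitute the ones from your exercise). Here the vectors are placed as columns so that the pivot columns identify an independent pair, and the cross product is used as a shortcut for the dot-product orthogonality conditions:

```python
from sympy import Matrix

# Placeholder vectors: v is a multiple of u, so only two of the three are independent.
u, v, w = [1, 2, 3], [2, 4, 6], [0, 1, 1]

A = Matrix([u, v, w]).T       # put the vectors in the columns
R, pivots = A.rref()          # pivot column indices pick out a maximal independent subset
print(pivots)                 # (0, 2): u and w are linearly independent here

# A third vector orthogonal to both (equivalently, solving the two dot-product conditions):
print(Matrix(u).cross(Matrix(w)))
```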
Complement of a simply connected set is simply connected | I consider path-connectedness to be part of "simply connected". As a counterexample to your question when the set is not closed, take the Warsaw circle (a closed up topologist's sine curve), delete a point on the "reasonable" part of the curve, and take the complement of the result. The punctured Warsaw circle is not path-connected, but its complement is simply connected.
The complement of the obvious $S^2$ in $S^n$ is homotopy equivalent to $S^{n - 3}$, so no (take $n=3,4$). You can do arbitrarily badly; many many groups appear as the fundamental group of knotted 2-spheres in $S^4$, and the same for n-spheres in $S^{n+2}$ etc.
Indeed, fix $n \geq 3$; Kervaire proved that a finitely presented group $G$ is the fundamental group of the complement of some smoothly embedded $S^n \hookrightarrow S^{n+2}$ iff the abelianization of $G$ is $\Bbb Z$, $H_2(G;\Bbb Z) = 0$, and $G$ is normally generated by some element $m$. These are not too violent to ask. |
Subgroup $(I, +)$ has finite index in $(R, +)$ | Basic idea: $a\in I$ also means $a\sqrt7\in I$. Now show that any element of $R/I$ can be written as $x + y\sqrt7 + I$ with $0\leq x, y<a$.
For the second part, where you're given an arbitrary non-zero $I$ instead of an $I$ containing an integer, you have to take an arbitrary non-zero element of $I$ and use that to show that there is a non-zero integer in $I$. The rest follows from part one. |
Meaning of $\lor$ and $\land$ | If you represent false and true as $0$ and $1$ respectively, $x\land y=\min\{x,\,y\}$ while $x\lor y=\max\{x,\,y\}$ (verify these with a truth table if you like). This $\land=\min,\,\lor=\max$ identification is the basis of semilattices. Using it on real numbers is just a very boring example of a lattice (i.e. a semilattice with both, satisfying absorption laws). |
$\overline{\theta}$ the maximum likelihood estimator of $\theta \implies$? | It is true for any function $g$, as can be seen in many introductory book on mathematical statistics. This is called the invariance property of the MLE (or sometimes Zehna's theorem).
The idea of the proof is that the transformation $g$ induces a likelihood on its target, and the maximizer of this induced likelihood is the maximum likelihood estimator.
Giving some more details: suppose $g:\Lambda\to\Theta$ is a transformation of your parameter space. For $\theta\in \Theta$, one can define
$$\Lambda_\theta = \{\lambda \in\Lambda: g(\lambda)=\theta\}.$$
The induced likelihood I'm referring to above is given by
$$M(\theta) = \sup_{\lambda\in\Lambda_\theta}L(\lambda),$$
where $L$ is the likelihood function on the original parameter space. |
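A concrete illustration (a standard example, not tied to any particular model in the question): let $X_1,\dots,X_n$ be i.i.d. Exponential with rate $\lambda$, so $\hat\lambda = 1/\bar X_n$ is the MLE, and take $g(\lambda)=1/\lambda$, the mean. Each $\Lambda_\theta=\{\lambda: 1/\lambda=\theta\}=\{1/\theta\}$ is a singleton, so
$$M(\theta)=\sup_{\lambda\in\Lambda_\theta}L(\lambda)=L(1/\theta),$$
and maximizing $M$ over $\theta$ is the same as maximizing $L$ over $\lambda$; the maximizer is $\hat\theta=g(\hat\lambda)=\bar X_n$.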
Graph not obeying first derivative test | The function is not differentiable at $0$. So the behavior just below zero and just above zero could be quite different (and in this case, they are quite different). |
Is our interest in $\mathbb{R}$ "historical"? | I think you can definitely try and make a distinction between fields of mathematics that are "founded on $\mathbb{R}$" in a strong sense and those that aren't (but in which our familiarity with $\mathbb{R}$ and its cousins might play and important role in terms of providing examples, intuition, driving questions, etc).
Consider the following fields:
Set theory. I would argue that the real numbers play no foundational role in set theory. You can define and investigate the notions of cardinality, partial orders, etc without even bothering to construct the real numbers. Of course one can say that our familiarity and interest in the real numbers is the reason set theory was created in the first place and problems like the continuum hypothesis (which involve the real numbers) served as important foundational problems in the field, but from a mathematical point of view, I would say that the interesting set-theoretic questions that involve the real numbers can be viewed as "applications" of set theory to specific situations and that the core of set theory has nothing to do with the real numbers.
Group theory or any abstract algebra field. Again, one can investigate the abstract properties of groups without knowing anything about the real numbers. The familiarity with the real numbers can give us important examples of groups (such as Lie groups) but again, results about such groups can be viewed as applications of group theory to $\mathbb{R}$-founded objects.
Smooth manifolds. Here, we are working with objects that are modelled locally on $\mathbb{R}^n$ and we are generalizing our ability to do calculus over the reals to this more abstract setting. This is a field that in my opinion is "strongly founded on $\mathbb{R}$". You can of course try to abstract away the properties that make the theory work (what do you need in order to do calculus, etc.), and people have done that, resulting in fields that are much less $\mathbb{R}$-founded, but investigating smooth manifolds itself remains very $\mathbb{R}$-founded.
Now, regarding general topology, I would like to argue that while it has many applications to objects that are "founded on $\mathbb{R}$" and fields that are founded on $\mathbb{R}$ in a strong sense (such as manifold theory), it is, not, per se, a field that is founded on $\mathbb{R}$ and the real numbers don't play a particularly important role.
The basic players in topology are defined using an abstract family of axioms (very much like in group theory). By imposing additional restrictions (separation axioms, again, defined in terms of the basic operations), we can single out specific classes of topological spaces. For example, one might define a family of spaces that are regular, Hausdorff and have a countably locally finite basis and investigate their properties. It turns out by the Nagata-Smirnov metrization theorem that such spaces are precisely the topological spaces that admit a metric, but a priori we can investigate the topological properties of such spaces without introducing a metric at all. Choosing a metric on such a space can be considered as introducing "auxiliary" data that helps us (as we are familiar with the real numbers and properties of distance in Euclidean spaces) to analyze the family and describe its topological properties.
Regarding path-connectedness, the amazing answer of Eric to this question shows that the notion of path-connectedness can also be defined without introducing the interval $[0,1]$, and using the interval to define a path can be considered as introducing "auxiliary" data that helps us to visualize, build intuition about, and analyze path-connected spaces.
Of course, the history went the other way: we care about paths because we visualize them as generalizations of paths in a Euclidean space, and we care about separation axioms because we care about metric spaces and want to understand which results that hold in metric spaces can be "abstracted away"; but once the abstraction has been done, the real numbers stop playing a foundational role.
Homeomorphisms between a circle and a square | No. You can easily construct homeomorphisms of $S^1$ to $S^1$ that do not preserve antipodal points, and then compose one of those with a map that radially projects points on $S^1$ to the square to get a map which also doesn't preserve antipodal points.
For the former, for example, map the upper half of the circle to only the first quarter, and then map the lower half ("$3\times$ faster") to the other three quarters. |
If $0$ is the unique critical value of $f$, than, with $ab>0$, $f^{-1}(a)$ and $f^{-1}(b)$ are diffeomorphics | This is basically the First Fundamental Theorem of Morse theory. As in the comments, define
$X_x=\begin{cases} \frac{\nabla f(x)}{\|\nabla f(x)\|^2} \quad x\in f^{-1}((a-\epsilon,b+\epsilon))\\ 0\quad \quad \quad \text{otherwise}\end{cases}$
This is well-defined because there are no singularities of $f$ between $a$ and $b$, so there is some $\epsilon>0$ such that this is still true on the larger set $f^{-1}((a-\epsilon,b+\epsilon))$. (Why?)
This is just the gradient vector, normalized and restricted to a certain piece of the domain of $f$.
Now let $\phi:\mathbb R\times \mathbb R^3\to \mathbb R^3$ be the flow of the vector field $X$, i.e. the map determined by
$\frac{d(\phi(t,x))}{dt}=X_{\phi(t,x)}, \qquad \phi(0,x)=x.$
(Strictly speaking, to make $X$ smooth one should first damp $\frac{\nabla f}{\|\nabla f\|^2}$ by a smooth bump function that is $1$ on $f^{-1}([a,b])$ and vanishes outside $f^{-1}((a-\epsilon,b+\epsilon))$.) Each $\phi_t:=\phi(t,\cdot)$ is a diffeomorphism, and $\phi_{-t}\circ \phi_t=\text{id}.$
The claim is now that $\phi_{b-a}$ is a diffeomorphism onto $f^{-1}(b)$ when restricted to $f^{-1}(a).$
We have $\frac{d(f\circ \phi)(t,x)}{dt}\overset{\text{chain rule}}{=}\left\langle \frac{d\phi(t,x)}{dt},\nabla f(\phi(t,x))\right\rangle=\left\langle X_{\phi(t,x)},\nabla f(\phi(t,x))\right\rangle.$
It follows from this that $\frac{d(f\circ \phi)(t,x)}{dt}=1$ as long as $\phi(t,x)$ stays in $f^{-1}([a,b])$, where $X$ is exactly the normalized gradient. In particular, for $x\in f^{-1}(a)$ and $0\le t\le b-a$ the flow cannot leave $f^{-1}([a,b])$, so there
$f\circ \phi(t,x)= t+f(x).$
Now, consider the diffeomorphism $\varphi:=\phi_{b-a}|_{f^{-1}(a)}.$ If $x\in f^{-1}(a)$ then $f(\phi(b-a,x))=(b-a)+f(x)=(b-a)+a=b,$ so $\varphi$ maps $f^{-1}(a)$ into $f^{-1}(b).$
Now suppose $y\in f^{-1}(b)$ and let $x=\phi_{a-b}(y).$ Running the same computation backwards gives $f(x)=(a-b)+b=a,$ so $x\in f^{-1}(a),$ and $\varphi(x)=\phi_{b-a}\circ \phi_{a-b}(y)=y;$ so in fact $\varphi$ maps $\textit{onto}\ f^{-1}(b)$ and we are done.
Why is this 3-manifold irreducible? | I'm going to extract the essential argument from the general theory of Seifert-fibered spaces. Let $M=\mathbb{R}\mathrm{P}^2\times S^1$, which as a Seifert-fibered space is very special since it is in fact a product $S^1$-bundle over $\mathbb{R}\mathrm{P}^2$.
Let $\alpha\subset\mathbb{R}\mathrm{P}^2$ be a non-separating simple closed curve (an image of a geodesic in $S^2$). Then, $T=\alpha\times S^1\subset M$ is a torus with $M-T$ a solid torus. Let $D=\mathbb{R}\mathrm{P}^2\times\{0\}-T$, which is a disk cross-section of this solid torus, and furthermore let $A=T-\mathbb{R}\mathrm{P}^2\times\{0\}$, which is an annulus. Then, $B=M-(A\cup D)$ is a ball. (Note: I am being cavalier with what I mean by a complement, where to be completely accurate I should be removing tubular neighborhoods.) We can use the sorts of techniques of normal surface theory, as seen in the proof of Kneser's prime decomposition theorem, to put a given surface in a nice position relative to $B$, $A$, and $D$.
Consider an embedded sphere $S\subset M$ that we presume does not bound a ball. By an isotopy, we can assume $S$ is transverse to $A$ and $D$. Suppose $S\cap A$ contains a closed loop that bounds a disk in $A$, and take the innermost such loop and disk. With it, we may do surgery on $S$ to get a pair of spheres. If both spheres bounded balls in $M$, then so too would $S$, so at least one does not; replace $S$ with this sphere. Hence, we may assume $S\cap A$ is a collection of arcs and essential loops in $A$. Similarly, we may assume $S\cap D$ is a collection of arcs.
Consider an innermost arc of $S\cap D$ that bounds a lune in $D$ such that the lune does not contain an antipodal pair of points in $\partial D$ (in the sense that $\partial D$ is a double cover of $\alpha$). We can isotope $S$ along this lune to remove a pair of points of intersection in $S\cap \alpha$. Hence, $S\cap D$ contains only arcs that connect antipodal points of $\partial D$. By considering the $\mathbb{Z}/2\mathbb{Z}$ intersection number, $S\cap D$ contains at most one such arc.
If there were such an arc, $S\cap \alpha$ would be a single point, and so the only arc in $S\cap A$ would be one that connects the two boundary components. This implies there are no essential loops, so $S\cap B$ contains a single component, which must be a disk since $S$ is a sphere. But such a disk in $B$ would imply that $S$ is a torus or Klein bottle, contrary to the fact that it is a sphere. Thus, it must be the case that $S\cap D$ is empty.
It follows that $S\cap A$ is a collection of essential loops. Again, we can assume $S\cap B$ is a collection of disks by performing surgery on $S$ if necessary and keeping one of the two components. Each loop in $S\cap A$, then, corresponds to an $\mathbb{R}\mathrm{P}^2$ component of $S$, so $S\cap A$ is empty.
Hence, $S\subset B$, but Alexander's theorem would imply $S$ bounds a ball, yet $S$ bounds no such thing. |
Generated $\sigma$-algebras and borel sets on infinite products | The Borel $\sigma$-algebra on $\Bbb R^I$ is $\sigma(\mathcal T^I)$, where $\mathcal T^I$ is the product topology on $\Bbb R^I$. When $I$ is uncountable, this is strictly larger than the product $\sigma$-algebra $\mathcal B(\Bbb R)^I$. |
Improper Riemann Integral and Lebesgue Integral coincide when... | Assume without loss of generality that $I=(a,b)$ and consider $I_{n}=[a+1/n,b-1/n]$ for large $n$, assume that
\begin{align*}
(R)\int_{a}^{b}|f(x)|dx
\end{align*}
exists in improper Riemann sense, then
\begin{align*}
\infty>(R)\int_{a}^{b}|f(x)|dx=\lim_{n\rightarrow\infty}(R)\int_{I_{n}}|f(x)|dx,
\end{align*}
and so the Lebesgue integral of $|f|$ on $I$ that
\begin{align*}
(L)\int_{I}|f(x)|dx&=\lim_{n\rightarrow\infty}(L)\int_{I_{n}}|f(x)|dx\\
&=\lim_{n\rightarrow\infty}(R)\int_{I_{n}}|f(x)|dx\\
&=(R)\int_{a}^{b}|f(x)|dx,
\end{align*}
note that the first equality is by the Monotone Convergence Theorem, and that on a compact interval a Riemann integrable function is also Lebesgue integrable with the same value of the integral.
Characterize the commutative rings with trivial group of units | This is only a partial answer, to record some thoughts:
If $R$ is semilocal, i.e. has only finitely many maximal ideals (in particular, if $R$ is finite), then one can characterize these rings as precisely the finite products of copies of $\mathbb{F}_2$. If $R$ is semilocal, then any surjection $R \twoheadrightarrow R/I$ induces a surjection on unit groups $R^\times \twoheadrightarrow (R/I)^\times$ (see here for proof). In particular, for any maximal ideal $m$ of $R$, $R/m$ is a field with only one unit, hence must be $\mathbb{F}_2$. Since the Jacobson radical of $R$ is $0$, there is an isomorphism $R \cong \prod_{m \in \text{mSpec}(R)} R/m = \prod \mathbb{F}_2$ by Chinese Remainder, and conversely any finite product of $\mathbb{F}_2$'s does indeed have only one unit.
If there are infinitely many maximal ideals, then it is not clear if every residue field at a maximal ideal is $\mathbb{F}_2$. If this is the case though, then although Chinese remainder fails, one can still realize $R$ as a subring of a product of copies of $\mathbb{F}_2$, so we get a characterization in this case. Thus:
If $R$ is a subring of a direct product of copies of $\mathbb{F}_2$, then $R$ has trivial unit group. The converse holds if every maximal ideal of $R$ has index $2$; in particular it holds if $R$ is semilocal.
Update: There are more examples of such rings than products of $\mathbb{F}_2$ or polynomial rings over $\mathbb{F}_2$ though. If $S = \mathbb{F}_2[x_1, \ldots]$ is a polynomial ring over $\mathbb{F}_2$ (in any number of variables) then for any homogeneous prime ideal $P \subseteq S$ (necessarily contained in the irrelevant ideal $(x_1, \ldots)$), the ring $S/P$ has trivial unit group. Since the property of having trivial unit group passes to products and subrings, the same holds if $P$ is only assumed to be radical (and still homogeneous).
Conversely, any ring $R$ with trivial unit group is a reduced $\mathbb{F}_2$-algebra, hence has a presentation $R \cong \mathbb{F}_2[x_1, \ldots]/I$, where $I$ is radical. We can even realize it as a dehomogenization $R \cong (\mathbb{F}_2[t, x_1, \ldots]/J)/(t-1)$, where $J$ is a homogeneous radical ideal. Thus if any dehomogenization (at a variable) of a ring of the form $S/I$, where $I$ is a homogeneous radical ideal, has trivial unit group, this would yield a characterization. This in turn is equivalent to asking whether or not the multiplicative set $1 + (t - 1)$ is saturated in $S/I$ (at this point, I must leave this line of reasoning as is, but would welcome any feedback).
Update 2: Upon reflection, it's easy to see that not every dehomogenization of a graded reduced $\mathbb{F}_2$-algebra will have trivial unit group, i.e. for $\mathbb{F}_2[t,x,y]/(xy - t^2)$, setting $t = 1$ gives $\mathbb{F}_2[x,y]/(xy - 1)$ which has nontrivial units. I'll have to think a little more about the right strengthening of the condition on $I$. |
adaptation of Hahn-Banach for destination space $\ne \Bbb R$ | First, you can uniquely extend $T$ to $\bar F$. Then use the orthogonal decomposition $H = \bar F \oplus F^\perp$ and set $T=0$ on $F^\perp$.
Set Theoretic Definition of Numbers | Yes. And no.
You start with $\mathbb{N}$, and define $+$ and $\times$ (and $\lt$ and so on) appropriately.
Then you define an equivalence relation on $\mathbb{N}\times\mathbb{N}$ given by
$$(a,b)\sim(c,d) \Longleftrightarrow a+d=b+c,$$
and call the quotient set $(\mathbb{N}\times\mathbb{N})/\sim$ by the name $\mathbb{Z}$. (Behind the scenes, we are thinking of $(a,b)$ as meaning "the solution to $a=x+b$").
We can then define an addition $+_{\mathbb{Z}}$ and a product $\times_{\mathbb{Z}}$ on $\mathbb{Z}$, as well as an order $\leq_{\mathbb{Z}}$ by
$$\begin{array}{rcl}
[(a,b)]+_{\mathbb{Z}}[(c,d)] &=& [(a+c,b+d)]\\\
[(a,b)]\times_{\mathbb{Z}}[(c,d)] &=& [(ac+bd,ad+bc)]\\\
[(a,b)] \leq_{\mathbb{Z}} [(c,d)] &\Leftrightarrow& a+d\leq b+c,
\end{array}$$
and show that this is well defined. (I am using $[(a,b)]$ to denote the equivalence class of the pair $(a,b)$.)
Certainly, $\mathbb{N}$ and $\mathbb{Z}$ are entirely different animals; set-theoretically, you can even show that they are disjoint.
But we can define a map $f\colon \mathbb{N}\to\mathbb{Z}$ by $f(n) = [(n,0)]$. This map is one-to-one, and for all $n,m\in\mathbb{N}$,
$$\begin{array}{rcl}
f(n+m) &=& f(n)+_{\mathbb{Z}}f(m),\\\
f(n\times m) &=& f(n)\times_{\mathbb{Z}}f(m)\\\
n\leq m &\Leftrightarrow& f(n)\leq_{\mathbb{Z}} f(m)
\end{array}$$
That means that even though $\mathbb{N}$ and $\mathbb{Z}$ are disjoint, there is a "perfect copy" of $\mathbb{N}$ (in so far as its operations $+$ and $\times$ are concerned, and as far as the order $\leq$ is concerned) sitting inside of $\mathbb{Z}$. (Added: In fact, this $f$ not only gives us perfect copy, it is the only map from $\mathbb{N}$ to $\mathbb{Z}$ that is one-to-one and respects all the operations; we say it is a "canonical embedding"). Since we have this perfect copy, and a very specific map identifying this copy with the original, we can think of $\mathbb{N}$ as being a subset of $\mathbb{Z}$ by "identifying it with its copy". So we do that. We can then introduce notation by showing that for every $[(a,b)]\in\mathbb{Z}$, either $a=b$, or there exists $n\in\mathbb{N}$, $n\neq 0$, such that $[(a,b)]=f(n)=[(n,0)]$, or there exists $n\in\mathbb{N}$, $n\neq 0$, such that $[(b,a)]=f(n)=[(n,0)]$; and then using $0$ to denote the class with $a=b$, $n$ to denote the class with $[(a,b)]=[(n,0)]$, and $-n$ to denote the class $[(c,d)]$ with $[(d,c)]=[(n,0)]$. This notation makes the identification clearer.
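For readers who like to see the construction in action, here is a tiny sketch (plain Python, with names of my own choosing) of $\mathbb{Z}$ as equivalence classes of pairs of naturals, together with the embedding $f(n)=[(n,0)]$:

```python
class Z:
    """An integer represented by a pair (a, b) of naturals, read as 'a - b'."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __eq__(self, other):              # (a,b) ~ (c,d)  iff  a + d = b + c
        return self.a + other.b == self.b + other.a

    def __add__(self, other):             # [(a,b)] + [(c,d)] = [(a+c, b+d)]
        return Z(self.a + other.a, self.b + other.b)

    def __mul__(self, other):             # [(a,b)] * [(c,d)] = [(ac+bd, ad+bc)]
        return Z(self.a * other.a + self.b * other.b,
                 self.a * other.b + self.b * other.a)

def f(n):                                 # the canonical embedding of N into Z
    return Z(n, 0)

print(f(2) + f(3) == f(5))                # True: f respects addition
print(f(2) * f(3) == f(6))                # True: f respects multiplication
print(f(2) + Z(0, 3) == Z(0, 1))          # True: 2 + (-3) = -1
```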
Similarly, once we have $\mathbb{Z}$, we define $\mathbb{Q}$ as the quotient of $\mathbb{Z}\times(\mathbb{Z}-\{0\})$ modulo $\cong$, where
$$(a,b)\cong (c,d) \Longleftrightarrow ad=bc$$
(behind the scenes, we think of $(a,b)$ as meaning "the solution to $a=xb$").
We can then proceed as before, defining
$$\begin{array}{rcl}
[(a,b)]+_{\mathbb{Q}}[(c,d)] &=& [(ad+bc,bd)]\\\
[(a,b)]\times_{\mathbb{Q}}[(c,d)] &=& [(ac,bd)]
\end{array}$$
and showing this is well defined; defining an order, etc. Again, $\mathbb{Q}$ and $\mathbb{Z}$ (and the original $\mathbb{N}$) are completely different sets. But we have a function $g\colon\mathbb{Z}\to\mathbb{Q}$ defined by $g(a) = [(a,1)]$. This is one-to-one, $g(a+_{\mathbb{Z}}b) = g(a)+_{\mathbb{Q}}g(b)$, and $g(a\times_{\mathbb{Z}}b) = g(a)\times_{\mathbb{Q}}g(b)$. (Added: And again, this is the only map from $\mathbb{Z}$ to $\mathbb{Q}$ that satisfies these conditions.) So again, we have a "perfect copy" of $\mathbb{Z}$ sitting inside of $\mathbb{Q}$ (and so also a perfect copy of the perfect copy of $\mathbb{N}$ that is sitting inside of $\mathbb{Z}$). So once again we "identify" $\mathbb{Z}$ with its image inside $\mathbb{Q}$ (and so we identify $\mathbb{N}$ with its image inside the image of $\mathbb{Z}$). Because, via $f$ and $g$, we have perfect copies of them anyway.
We do the same thing with $\mathbb{Q}$ as being "inside of $\mathbb{R}$", by identifying elements of $\mathbb{Q}$ with specific Dedekind cuts or with specific equivalence classes of Cauchy sequences, showing the identification is one-to-one and respects all the operations (and is essentially unique), and so obtaining a "perfect copy" of $\mathbb{Q}$ inside of $\mathbb{R}$ (and by extension, perfect copies of $\mathbb{N}$ and of $\mathbb{Z}$ also sitting inside of $\mathbb{R}$).
You can keep going, of course: define $\mathbb{C}$ as the set of all pairs $\mathbb{R}\times\mathbb{R}$; then identify $\mathbb{R}$ with the pairs $\mathbb{R}\times\{0\}$, and you have a copy of $\mathbb{N}$ sitting inside a copy of $\mathbb{Z}$ sitting inside a copy of $\mathbb{Q}$ sitting inside a copy of $\mathbb{R}$ sitting inside $\mathbb{C}$. (And then you can stick $\mathbb{C}$ inside the quaternions, the quaternions inside the octonions).
So even though they are actually very different sets, we have copies of each sitting inside the "next one", copies that respect all the structures we are interested in, so we can still think of them as being "subsets".
You used to do that all the time without the formalism: we think of "fractions" as being made up of an integer, a solidus, and a nonzero integer, so that "$3$" is not a fraction; but when needed, we are perfectly happy writing "$3 = \frac{3}{1}$" and working with either version of $3$ (the integer, or the fraction) depending on context. |
checking if consistent estimator using LLN | Here are some quick hints:
I suppose you know that $E(1/\bar X_n) = n\theta/(n-1),$ so $T_n = 1/\bar X_n$ is a biased estimator of $\theta$. Of course, $n/(n-1) \rightarrow 1$ with increasing $n$, so it is asymptotically unbiased.
The LLN shows directly that $\bar X_n$ converges in probability (hence in distribution) to $\mu = 1/\theta.$ It seems possible, but a little messy, to deal with reciprocals here. (I have dim memories of working through something similar once, but not recently, so no guarantee.)
Perhaps an approach similar to yours with Markov's inequality is easier to understand:
$$ 0 \leq P\{|(1/\bar X_n) - \theta| \ge \epsilon\} \le E\left[\left((1/\bar X_n) - \theta\right)^2\right]/\epsilon^2 = e_n.$$
Then show that $e_n \rightarrow 0$. The numerator of $e_n$ is the variance of $T_n =1/\bar X_n$ plus its squared bias, and $T_n$ has an inverse gamma distribution with a known variance (e.g., Wikipedia). [Note: this may be a little roundabout, because convergence in mean square implies convergence in probability, but that is not what you asked.]
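For a quick numerical sanity check of the consistency (a simulation sketch with numpy, assuming the $X_i$ are exponential with rate $\theta$; here $\theta=2$ and $\epsilon=0.1$ are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, eps, reps = 2.0, 0.1, 2000

for n in (10, 100, 1000, 10000):
    xbar = rng.exponential(scale=1 / theta, size=(reps, n)).mean(axis=1)
    Tn = 1 / xbar
    # empirical P(|T_n - theta| >= eps): shrinks toward 0 as n grows
    print(n, np.mean(np.abs(Tn - theta) >= eps))
```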
Various definitions of group action | This isn't a complete answer, but I thought it would be useful to write it out in more detail than would fit in the space provided.
First, I'd like to observe that both of the equivalent definitions of group action admit generalisations:
The definition using a homomorphism $G \to \text{Aut}(X)$ generalises straightforwardly to the case when $G$ is a (discrete) group and $X$ a vector space, giving rise to the notion of linear representations of groups.
The definition using a map $G \times X \to X$ generalises straightforwardly to the case when $G$ is a topological group and $X$ a topological space, giving rise to the notion of continuous group actions on topological spaces.
Your question seems to be asking whether we can find a definition which would encompass both branches of generalisations. It's not an unreasonable question — after all, both of the examples above converge when we have a continuous linear representation of a topological group.
Let's try to formulate categorically the equivalence of the two definitions in $\textbf{Set}$, and see how things go when we try to change to a more general category. Firstly, we note that a small group is equivalently
a category $\mathcal{G}$ enriched over $\textbf{Set}$ with one object and all arrows invertible, and
a set $G$ equipped with maps $m : G \times G \to G$, $e : 1 \to G$, $i : G \to G$ satisfying the group axioms.
The connection between the two definitions is expressed by the equation $\mathcal{G}(*, *) = G$, where $*$ is the unique object in $\mathcal{G}$. Using the first definition, a group action of $G$ is simply any functor $\mathscr{F} : \mathcal{G} \to \textbf{Set}$. Focusing on the hom-sets, we see that we have a monoid homomorphism $G \to \textbf{Set}(X, X)$, where $X = \mathscr{F}(*)$, and since the domain is a group, the image must lie in $\text{Aut}(X) \subseteq \textbf{Set}(X, X)$. Thus we have the first definition of group action.
Now, we recall that $\textbf{Set}$ is a cartesian closed category (indeed, a topos), so in particular we have the exponential objects $Y^X$, which are defined by following the universal property: $\text{Hom}(Z \times X, Y) \cong \text{Hom}(Z, Y^X)$ naturally in $Z$ and $Y$. Thus, we may identify the map $G \to \textbf{Set}(X, X)$ with a map $G \times X \to X$, and translating the homomorphism axioms through this identification gives the second definition of group action.
Consider a group object $G$ in a cartesian monoidal category $(\mathcal{C}, \times, 1)$. This may be viewed as a category $\mathcal{G}$ enriched over $\mathcal{C}$. Then, an action of $G$ on an object $X$ in another category $\mathcal{D}$ enriched over $\mathcal{C}$ is simply a $\mathcal{C}$-enriched functor $\mathcal{G} \to \mathcal{D}$. If $\mathcal{D}$ is such that there is a $\mathcal{C}$-enriched "forgetful" functor $U : \mathcal{D} \to \mathcal{C}$ and $\mathcal{C}$ is a cartesian closed category, we may do the same trick as before and obtain an arrow $G \times X \to X$ in $\mathcal{C}$.
In particular, if $\mathcal{C}$ is a cartesian closed category, it is enriched over itself. Indeed, consider the counit $\epsilon_{Z,X} : Z^X \times X \to Z$ of the product-exponential adjunction. If we compose with $\text{id} \times \epsilon_{X,Y} : Z^X \times X^Y \times Y \to Z^X \times X$, we get a map $Z^X \times X^Y \times Y \to Z$, whose transpose gives a map $Z^X \times X^Y \to Z^Y$, which is the composition of arrows. Thus we may recover the notion of continuous group actions at least in the case where both the group and the space being acted on are compactly-generated Hausdorff spaces...
Question about Improper Integrals | Probably tricky !
Let us consider $$I=\int_{-1}^0 \frac{e^x}{e^x-1} \,dx\qquad J= \int_{0}^1 \frac{e^x}{e^x-1} \,dx $$ For the first integral, let us substitute $x=-y$; this gives $$I=\int_{-1}^0 \frac{e^x}{e^x-1} \,dx=-\int_1^0\frac{e^{-y}}{e^{-y}-1}\,dy=\int_1^0\frac{dy}{e^y-1}=-\int_0^1\frac{dy}{e^y-1}=-\int_0^1\frac{dx}{e^x-1}$$ So $$I+J=\int_0^1 dx=1$$
We should get the same result by interpreting the integral as a Principal Value: each of $I$ and $J$ diverges on its own near $0$, and it is only the symmetric combination that converges.
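Here is a quick numerical check of that principal-value reading (plain Python, using the antiderivative $\ln|e^x-1|$ and a symmetric cutoff $\varepsilon$ around the singularity at $0$):

```python
from math import exp, log

F = lambda x: log(abs(exp(x) - 1.0))   # antiderivative of e^x / (e^x - 1)

for eps in (1e-2, 1e-4, 1e-6):
    I = F(-eps) - F(-1.0)              # integral over [-1, -eps]
    J = F(1.0) - F(eps)                # integral over [eps, 1]
    print(eps, I + J)                  # equals 1 - eps, so the principal value is 1
```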
Why is there no analogy of completing the square for quartics and higher? | The direct analog for higher degrees is depressed polynomials.
For a polynomial $\;a_n x^n + a_{n-1} x^{n-1} + \cdots + a_0\;$ the Tschirnhausen substitution $y = x + \frac{a_{n-1}}{n \, a_n}$ eliminates the term in the second highest power. In the case of a quadratic, that leaves just the square term, and a constant term. |
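For instance (with the substitution written as $x = y - \frac{a_{n-1}}{n\,a_n}$), the $n=2$ case is precisely completing the square, and the cubic case gives the classical depressed cubic:
$$
\begin{align}
a_2x^2+a_1x+a_0 &\;\xrightarrow{\;x\,=\,y-\frac{a_1}{2a_2}\;}\; a_2y^2+\Big(a_0-\frac{a_1^2}{4a_2}\Big),\\
x^3+px^2+qx+r &\;\xrightarrow{\;x\,=\,y-\frac{p}{3}\;}\; y^3+\Big(q-\frac{p^2}{3}\Big)y+\Big(r-\frac{pq}{3}+\frac{2p^3}{27}\Big).
\end{align}
$$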
How does De Moivre work, when $n$ is even? | You should only have $r=1$. The co-ordinates in polar form will always have non-negative length; in your case, $r=-1$ is accounted for by one of the values of $\theta$. |
Proving ($\leftarrow$ direction) that a sequence converges iff it is eventually constant | To show convergence to a limit $L$, you need to show that: given any definition of "close enough" (given $\epsilon>0$), eventually (there exists $N\in\mathbb{N}$ so that for $n>N$) your sequence is "close enough" to the limit ($\delta(p_n, L)<\epsilon$).
In this case: if your sequence is always equal to $p$ for $n$ sufficiently large, then $p$ should certainly be the limit. And conveniently, we know that $\delta(p, p)=0$.
So, let any $\epsilon>0$ be given. We know that for $n>N$, $p_n=p$; so, for all $n>N$, what can you say about $\delta(p_n, p)$? |