Integrable subbundle
It is not necessarily closed. For example, take $M=\mathbb R^3$ with coordinates $(x,y,z)$, and let $D$ be the subbundle spanned everywhere by $\partial/\partial z$. We could take $f_1 = dx$ and $f_2 = (z^2 +1) dy$, and then $f_1\wedge f_2$ is not closed. What is true is that in a neighborhood of each point, it is always possible to find $1$-forms $f_1,\dots,f_k$ whose wedge product is closed. In fact, it's always possible to have each $f_i$ individually closed. This follows from the Frobenius theorem -- in a neighborhood of each point, there are coordinates $(x^1,\dots,x^n)$ such that $D$ is annihilated by $dx^1,\dots,dx^k$.
Check convergence /divergence of a series $\sum_{n=1}^{\infty} (-1)^{n+1}(n)^{\frac{1}{4}}(\frac{1}{\sqrt{4n-1}}- \frac{1}{\sqrt{4n}} )$
Answer to the first series: $\frac 1 {\sqrt {4n-1}}- \frac 1 {\sqrt {4n}} =\frac {\sqrt {4n}-\sqrt {4n-1}} {\sqrt {4n-1} \sqrt {4n}}$. Write this as $\frac 1 {\sqrt {4n-1} \sqrt {4n}(\sqrt {4n}+\sqrt {4n-1})}$. Note that the denominator is of the order of $n^{3/2}$, so the general term is of the order of $n^{1/4}\cdot n^{-3/2}=n^{-5/4}$. Apply the comparison test now (note that $\frac 3 2-\frac 1 4=\frac 5 4 >1$). Hence the first series is absolutely convergent. A similar argument shows that the second series is not absolutely convergent. Try to prove its convergence using the Alternating Series Test.
Convex function parameter
$1)$ for $x\in(0,1/3)$ we have $$f(x)= x^2+a(3x-1)+4\to f'(x)=2x+3a$$ $2)$ for $x\in(1/3,1)$ we have $$f(x)= x^2-a(3x-1)+4\to f'(x)=2x-3a$$ The intersection between $(1)$ and $(2)$ happens at $x_i=1/3$. Note that $$\lim_{x\to 1/3^{-}} f'(x)=\frac{2}{3}+3a$$ $$\lim_{x\to 1/3^{+}} f'(x)=\frac{2}{3}-3a$$ One condition to guarantee convexity is that $f'(x)$ is increasing around the intersection $1/3$, not necessarily continuous. Here we are taking advantage of the fact that $f$ is piecewise quadratic. So, $$\lim_{x\to 1/3^{-}} f'(x)\le \lim_{x\to 1/3^{+}} f'(x)$$ $$\frac{2}{3}+3a\le \frac{2}{3}-3a\to a\le 0$$
$\{x_n\}$ is a bounded above sequence such that $x_{n+1} - x_n \ge a_n$, where $\sum a_k$ converges. Prove $x_n$ converges.
Your proof is correct, nicely done! Just as an illustration of how $\limsup$ and $\liminf$ can shorten such arguments: As $x_n\geq x_m+\sum_{k=m}^{n-1} a_k$ for all $m\leq n$ we have $$ \liminf_{n\to\infty}x_n\geq \liminf_{n\to\infty}\left(x_m+\sum_{k=m}^{n-1} a_k\right)=x_m+\sum_{k=m}^\infty a_k $$ and then taking $\limsup_{m\to\infty}$ on the right side we get $$ \liminf_{n\to\infty}x_n\geq\limsup_{m\to\infty}\left(x_m+\sum_{k=m}^\infty a_k \right)=\limsup_{m\to\infty}x_m+\lim_{m\to\infty}\sum_{k=m}^\infty a_k=\limsup_{m\to\infty}x_m $$ so $\{x_n\}$ converges.
Linear Maps between the $L^1$-spaces of two singular measures
Here is an example where $M$ exists. Let $(E,\mathcal E)$ denote any measurable space and $\mu$ and $\pi$ two probability measures on $(E,\mathcal E)$. Let $(\Omega,\Sigma)=(E\times E,\mathcal E\otimes\mathcal E)$, $\nu_1=\mu\times\pi$ and $\nu_2=\pi\times\mu$. Then $M(f):(x,y)\mapsto f(y,x)$ defines a suitable linear map $M:L^1(\nu_1)\to L^1(\nu_2)$. Note that if $\mu$ and $\pi$ are singular to each other, then so are $\nu_1$ and $\nu_2$. But I guess the real question is to show that $M$ may not exist, or that $M$ always exists... Edit: As mentioned on MO, $M=L_{\nu_1}$ always works, since this $M$ sends functions to constants and $L_{\nu_2}(c)=c$ for every constant function $c$.
If $A \in M_5(\mathbb R)$ and $A^2-4A-I=0$ find $(a_1-\frac{1}{a_1})+\ldots+(a_5-\frac{1}{a_5})$
First note that $A$ is indeed invertible: the relation gives $A(A-4I)=I$, so $A^{-1}=A-4I$. We may thus write $$ A -4I -A^{-1} = 0 $$ Now take the trace on both sides. We get $$ \operatorname{tr}(A) - 4\operatorname{tr}(I) - \operatorname{tr}(A^{-1}) = 0 $$ Now, we know two things:

- The trace of a matrix is the sum of its eigenvalues.
- The eigenvalues of $A^{-1}$ are the reciprocals of the eigenvalues of $A$.

Also, since these matrices are $5\times 5$, we have $\operatorname{tr}(I) = 5$. This gives $$ a_1+a_2+a_3+a_4+a_5 - 4\cdot 5 - \frac1{a_1}- \frac1{a_2}- \frac1{a_3}- \frac1{a_4}- \frac1{a_5} = 0 $$ from which you can easily extract the answer you're looking for.
Proving Ackermann's function is decidable through a Turing Machine
I don't know whether your code is correct or not (I've only glanced at it, and it's fairly early in the morning), but it's very high-level. "Run a Turing machine" is not a primitive operation in a Turing machine. If you're happy to express it in a Turing-equivalent language, rather than in a Turing machine itself, then your task is much easier. Instead, you could implement a stack, and push $m, n-1$ to the stack; then set your TM up so that at the start of execution, it seeks to the top of the stack, then reads the top two numbers from the stack. On completion, the TM should write its output to the tape, replacing the top two numbers of the stack. Then the operation "run a copy of myself" is just "run myself from the beginning" (or goto start state), because the TM will only consume the top two values and replace them with the output. You're reading $m$ and $n$ at the start, so the initial value on the tape should be $\mathrm{list}(m, n)$, where $\mathrm{list}$ is some function that expresses arbitrary lists of naturals as a single natural. (Then your TM should contain code to interpret that initial value correctly as a list.) Do remember that you need to prove that your TM halts, in order to show that Ack is decidable. For that, you'll need structural induction on $\mathbb{N}^2$ in lexicographic order.
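If it helps to see the stack idea concretely before encoding it on a tape, here is a minimal Python sketch of the same evaluation strategy (not a Turing machine, of course): the stack holds the pending first arguments, and `n` plays the role of the accumulator the machine would keep at the top of its tape.

```python
def ackermann(m, n):
    """Evaluate Ackermann's function with an explicit stack instead of recursion."""
    stack = [m]
    while stack:
        m = stack.pop()
        if m == 0:
            n += 1                  # A(0, n) = n + 1
        elif n == 0:
            stack.append(m - 1)     # A(m, 0) = A(m - 1, 1)
            n = 1
        else:
            stack.append(m - 1)     # the outer call A(m - 1, ...) waits here...
            stack.append(m)         # ...for the inner call A(m, n - 1)
            n -= 1
    return n

print(ackermann(2, 3))  # 9
```

The halting argument mentioned above carries over: each iteration replaces the pair being evaluated by lexicographically smaller pairs in $\mathbb{N}^2$.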
General Topology: Is there a surjective continuous map or a homeomorphism from $(0,1)^{\omega}$ in the uniform topology to $\mathbb R^{\omega}$ in the uniform topology?
The power $\mathbb{R}^\omega$ is not connected: define an equivalence relation on it by $x\sim y$ iff $\sup_n|x_n-y_n|$ is finite. The equivalence classes are all open, and hence the set of bounded sequences is both open and closed. The power $(0,1)^\omega$ is connected, even path-connected: given $x$ and $y$ the map $t\mapsto x+t(y-x)$ is continuous.
How does this simplification of this derivative work?
You applied the product rule incorrectly. You should have $$ (uv)'=u'v+uv' $$ where $u=r^2$ and $v=d\theta/dt$. Note that $$ u'=2r r'\quad v'=\frac{d^2\theta}{dt^2} $$ where $\displaystyle(\cdot)'=\frac{d(\cdot)}{dt}$. In full: \begin{align} \frac{1}{2r}\frac{d}{dt}\left(r^{2}\left(\frac{d\theta}{dt}\right)\right) &=\frac{1}{2r}(2r\frac{dr}{dt}\frac{d\theta}{dt}+r^2\frac{d^2\theta}{dt^2})\\ &=\frac{dr}{dt}\frac{d\theta}{dt}+\frac{r}{2}\frac{d^2\theta}{dt^2} \end{align}
Combination on grouping
I suspect this is a question where you need to know more about baseball rather than more about combinatorics! Does a "baseball nine" have some restrictions on how many of each type of player there can be? I don't understand baseball, but have googled it. (If someone more knowledgeable is reading, please let me know whether what follows is correct.) Based on that I assume a "nine" is a set of players capable of filling the nine positions: one catcher, one pitcher, four different infield positions and three different outfield positions. I don't know whether you're supposed to interpret two "nines" as being the same if they consist of the same nine players, or whether every player has to be in the same position. Based on this, can you see how many "nines" there are under the two interpretations: same nine players, i.e. order within the two larger groups doesn't matter; and same positions, i.e. order does matter?
Bernoulli measure is aperiodic
Let $(x_i)_{i \geq 1}$ be a countable subset of $X$ and let $$B_{i,k} = \{x \in X \mid (x)_{[-k,k]} = (x_{i})_{[-k,k]}\}$$ be the cylinder set of length $k$ which contains $x_i$. Show that $$B(n) := \bigcup_{i=1}^\infty B_{i,n2^i}$$ is a nested (decreasing) sequence of subsets that contains every $x_i$ for every value of $n$, and then show that the measure of $B(n)$ converges to $0$ as $n$ increases. Hence, what must the measure of $\{x_i \mid i \geq 1\}$ be?
Limit of $\prod_{k=1}^n \frac{(k^2)}{(k+1)^2}$
HINT Note that $$\prod_{i=1}^n \frac{i^2}{(i+1)^2}=\frac 1 {2^2}\cdot \frac {2^2} {3^2}\cdot \frac {3^2} {4^2}\cdot...\cdot \frac {(n-1)^2} {n^2}\cdot \frac {n^2} {(n+1)^2}=\frac 1 {(n+1)^2}$$
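A quick numerical sanity check of the telescoping product:

```python
n = 10
prod = 1.0
for i in range(1, n + 1):
    prod *= i**2 / (i + 1)**2
print(prod, 1 / (n + 1)**2)  # both equal 1/121 ≈ 0.008264
```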
$p+\sqrt{q}=r+\sqrt{s} \qquad p, q, r, s \in \mathbb{Q}$
Let $k=p-r$. Then $k\in\mathbb{Q}$ and $k+\sqrt{q}=\sqrt{s}$. \begin{align*} k^2+2k\sqrt{q}+q&=s\\ \end{align*} If $k\ne 0$, then $\displaystyle \sqrt{q}=\frac{s-q-k^2}{2k}\in\mathbb{Q}$ and $\sqrt{s}=k+\sqrt{q}\in\mathbb{Q}$. If $k=0$, then $p=r$ and $\sqrt{q}=\sqrt{s}$. Hence, $q=s$.
If $T:V \to V$ and $T$ is surjective, can I say that $T$ is diagonalizable?
No, you cannot, e.g. the map $\mathbb{R}^2 \to \mathbb{R}^2$ given by $$ \begin{pmatrix} 1& 1 \\ 0 & 1 \end{pmatrix} $$ is surjective but not diagonalizable.
Prove that this function is differentiable
Since $f$ is Lipschitz on $[a,b],$ $f$ is absolutely continuous on $[a,b].$ Thus $f'(x)$ exists a.e. in $[a,b],$ $f'\in L^1[a,b],$ and $$f(x) = f(a) + \int_a^xf'(t)\,dt, \,\,x\in [a,b].$$ Since the assumption in this problem gives $f'(x) = 0$ wherever $f'(x)$ exists, the above integral is $0$ for all $x\in [a,b].$ Thus $f$ is constant on $[a,b],$ and since $[a,b]$ is arbitrary, we have $f$ constant on $\mathbb R$ as desired.
Ordered $k$-covers of $[n]$
Given Dome's comment of 09/17/13 I interpret the question as follows: We have to count the number of $k$-tuples $(A_1,A_2,\ldots,A_k)$ of subsets $A_i\subset[n]$ having the property $\bigcup_{1\leq i\leq k}A_i=[n]$. This means that for each number $\ell\in[n]$ we can freely decide in which of the $A_i$, $\>1\leq i\leq k$, it shall occur, under the sole condition that it occurs in at least one of the $A_i$. In other words: We have to select for each $\ell\in[n]$ a nonempty subset $J_\ell\subset[k]$. When these sets $J_\ell$ have been selected put $$A_i:=\bigl\{\ell\in[n]\ \bigm|\ i\in J_\ell\bigr\}\qquad(1\leq i\leq k)\ .$$ There are $2^k-1$ nonempty subsets of $[k]$, and we can select one of them independently for each $\ell\in[n]$. Therefore the total number $N$ of admissible $k$-tuples $(A_1,A_2,\ldots,A_k)$ is given by $$N=\left(2^k-1\right)^n\ .$$
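For small $n$ and $k$ the count $(2^k-1)^n$ is easy to confirm by brute force; a sketch using bitmask-encoded subsets:

```python
from itertools import product

def covers(n, k):
    """Count k-tuples (A_1, ..., A_k) of subsets of [n] whose union is [n]."""
    full = (1 << n) - 1  # subsets of [n] encoded as n-bit masks
    count = 0
    for tup in product(range(1 << n), repeat=k):
        union = 0
        for a in tup:
            union |= a
        if union == full:
            count += 1
    return count

for n, k in [(2, 2), (3, 2), (2, 3)]:
    print(covers(n, k), (2**k - 1)**n)  # 9 9, 27 27, 49 49
```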
Linear Transformations in Linear Alg
Notice that the input vectors are multiples of each other and that the output vectors are multiples of each other. Since those multiples are not the same, such a transform doesn't exist.
What's wrong with this proof?
You have defined $b_1 = (v+w)\|v\|$ and $b_2 = (v-w)\|v\|$ in the first two lines. Thus, we have $$ b_1 \cdot b_2 = [\|v\|(v+w)] \cdot [\|v\|(v-w)] $$ which will only be equal to $(v+w)\cdot(v-w)$ if $\|v\| = 1$. Incidentally, I suspect that if you had simply written (v + w) • (v - w)= v•v - v•w + v•w - w•w =v•v - w•w= ||v||² - ||w||² = ||v||² - ||v||² = 0 and wrote "two vectors are orthogonal if their dot product is zero", then you would have received full credit.
Generalization of normal subgroups and ideals
> Is there any way to formalise this concept of a subset of an algebra that provides a partition making use of some operation, being this equivalence relation a congruence?

Here is the way it has been done. If $A$ is an algebra and $\theta$ is a congruence, then $\theta$ is called regular if it is generated as a congruence by any one of its classes. That is, if $C$ is any $\theta$-class, then $\theta$ is the least congruence containing $C\times C$. $A$ is congruence regular if all of its congruences are regular. A variety of algebras is congruence regular if all of its members are. Notes. (1) The term regular was introduced by Mal'cev in A.I. Mal'cev, On the general theory of algebraic systems, Mat. Sb., 35 (77) (1954), pp. 3-20. (2) Congruence regular varieties were characterized by Csakany in B. Csakany, Characterization of regular varieties, Acta Sci. Math. (Szeged), 31 (1970), pp. 187-189. (3) In unpublished notes from the 1970's, J. Hagemann proved that a congruence regular variety must be congruence modular and congruence $n$-permutable for some $n$. It is a consequence of this that any variety containing an algebra that has a non-discrete compatible partial order must fail to be congruence regular. So, for example, any unary variety, the variety of all semigroups, any variety of lattices, etc., will fail to be congruence regular.
Complementary of a preorder relation
First let me make clear that the formula in the question serves as the definition of $\overline R$. In particular, other than you thought, this overline notation does not denote a complement. As an example, consider the relation "$xRy \iff$ person $x$ is at most as tall as person $y$" with the following people:

- $a=\text{Adam}$, $170~\rm cm$
- $b=\text{Barbara}$, $180~\rm cm$
- $c=\text{Charles}$, $180~\rm cm$
- $d=\text{Doris}$, $190~\rm cm$

We obviously have $aRa,aRb,aRc,aRd,bRb,bRc,bRd,cRb,cRc,cRd,dRd$. Now $\rho_R = R \cap R^{-1}$ is the equivalence relation "$x$ is exactly as tall as $y$", with the three equivalence classes $[a]_{\rho_R} = \{a\}$, $[b]_{\rho_R} = [c]_{\rho_R} = \{b,c\}$, $[d]_{\rho_R} = \{d\}$. Now using the definition of $\overline R$, we get

- $[a]_{\rho_R} \overline R [a]_{\rho_R}$ because $aRa$.
- $[a]_{\rho_R} \overline R [b]_{\rho_R}$ because $aRb$.
- $[a]_{\rho_R} \overline R [d]_{\rho_R}$ because $aRd$.
- $[b]_{\rho_R} \overline R [b]_{\rho_R}$ because $bRb$.
- $[b]_{\rho_R} \overline R [d]_{\rho_R}$ because $bRd$.
- $[d]_{\rho_R} \overline R [d]_{\rho_R}$ because $dRd$.

This is obviously a partial order (indeed, in this case even a total order) of the three equivalence classes. In the general case, you can see that $\overline R$ is indeed a partial order as follows:

- $\overline R$ is actually well defined (this has to be checked!): If $[a]_{\rho_R} = [b]_{\rho_R}$ and $[c]_{\rho_R} = [d]_{\rho_R}$ then we have to show that $[a]_{\rho_R} \overline R [c]_{\rho_R}$ iff $[b]_{\rho_R} \overline R [d]_{\rho_R}$. If $[a]_{\rho_R} = [b]_{\rho_R}$ then we have both $aRb$ and $bRa$, and from $[c]_{\rho_R} = [d]_{\rho_R}$ we get $cRd$ and $dRc$. Now by definition of $\overline R$, $[a]_{\rho_R} \overline R [c]_{\rho_R}$ iff $aRc$. Now thanks to transitivity of $R$, we get from $bRa$ and $aRc$ that $bRc$, and then with $cRd$ that $bRd$. The direction $bRd \implies aRc$ works analogously. But by definition of $\overline R$, $bRd \iff [b]_{\rho_R} \overline R [d]_{\rho_R}$.
- Reflexivity: Due to reflexivity of $R$, we have $aRa$, and thus by definition of $\overline R$ we get $[a]_{\rho_R}\overline R [a]_{\rho_R}$.
- Transitivity: If $[a]_{\rho_R} \overline R [b]_{\rho_R}$ and $[b]_{\rho_R} \overline R [c]_{\rho_R}$, then we have by the definition of $\overline R$ that $aRb$ and $bRc$, and thus by transitivity of $R$ we have $aRc$, which by definition of $\overline R$ means $[a]_{\rho_R} \overline R [c]_{\rho_R}$.
- Antisymmetry: If $[a]_{\rho_R} \overline R [b]_{\rho_R}$ and $[b]_{\rho_R} \overline R [a]_{\rho_R}$, then by definition of $\overline R$ we have $aRb$ and $bRa$, but that means that $a$ and $b$ are equivalent according to $\rho_R$, thus $[a]_{\rho_R} = [b]_{\rho_R}$.
Computing the order of a particular group from its presentation
Your group is a special case of the dicyclic group (replace $n$ by $2^{m-2}$): $$G=\langle a,x \mid a^{2n} = 1 , x^2 = a^n, xax^{-1}=a^{-1} \rangle$$ You have already shown that $|G|\leq 4n$. It suffices to find a group $K$ of order $4n$ which satisfies the above relations: the subgroup of $\mathbb{H}^{\times}$ (non-zero quaternions under multiplication) generated by $a=e^{i\pi/n}, x=j$ does the job. It is not hard to verify $|K| = 4n$.
Prove that $x^4 + y^4 - 3xy = 2$ is compact
Note that $$\eqalign{x^4+y^4-3xy&={1\over2}(x^2+y^2)^2+{1\over2}(x^2-y^2)^2-3xy\geq{1\over2}(x^2+y^2)^2-{3\over2}(x^2+y^2)\cr &=(x^2+y^2)^2\left({1\over2}-{3\over 2(x^2+y^2)}\right)\geq{1\over4}(x^2+y^2)^2\geq 9\ ,\cr}$$ as soon as $x^2+y^2\geq 6$. Since $9>2$, every point of the constraint set satisfies $x^2+y^2< 6$. It follows that the constraint defines a closed and bounded, hence compact, set in the plane.
What is the formal adjoint?
I assume we work with real-valued functions. If $L = \sum_{\alpha} k_{\alpha} D^{\alpha}$ (using multi-index notation), where $k_{\alpha}$ are constants, then $L^{*}$ is given by $$ L^{*} = \sum_{\alpha} k_{\alpha} (-1)^{|\alpha|} D^{\alpha}. $$ To see why it makes sense, one may check that for $\phi, \psi \in C_0^{\infty}$ equality $\langle L \phi, \psi \rangle = \langle \phi, L^{*} \psi \rangle$ is just integration by parts.
Transformation of the limits in an integral (From Probability)
You can (and should) avoid writing integrals altogether because you don't know if the random variable $X$ has a density. Imitating your steps yields $$E e^{t(X-\mu)/\sigma} = e^{-t \mu / \sigma} E[e^{(t/\sigma)X}] = e^{-t \mu / \sigma} M(t/\sigma)$$ as long as $M(t/\sigma)$ is defined. Since $M(\cdot)$ is only defined on the interval $(-h, h)$, we must have $-h < t/\sigma < h$.
question on fibred products
A morphism into $A\times_B(B\times_CD)$ is equivalent to a morphism into $A$ and a morphism into $B\times_CD$ such that the corresponding diagram commutes. But a morphism into $B\times_CD$, in turn, is equivalent to a morphism into $B$ and a morphism into $D$ which make another diagram commute. So all in all a morphism into $A\times_B(B\times_CD)$ is equivalent to three morphisms - into $A,B,D$ - that fit into a commutative diagram. Note that since there is a given morphism $A\to B$, once we have a morphism into $A$ we automatically get a morphism into $B$ by composition. Thus the three above morphisms are equivalent to a pair of morphisms - into $A,D$ - that commute over $C$. In conclusion, $A\times_B(B\times_CD)\cong A\times_CD.$
System of equations from weighted Gaussian Quadrature
Considering the equations$$\left\{ \begin{array}{l} 1 = a_0 + a_1 \\ \frac{1}{4} = a_0 x_0 + a_1 x_1 \\ \frac{1}{9} = a_0 x_0^2 + a_1 x_1^2 \\ \frac{1}{16} = a_0 x_0^3 + a_1 x_1 ^3 \\ \end{array} \right.$$ use the first and second to eliminate $a_0$ and $a_1$ as functions of $x_0$ and $x_1$; this leads to $$a_0=-\frac{4 {x_1}-1}{4 ({x_0}-{x_1})}$$ $$a_1=1-\frac{4 {x_1}-1}{4 ({x_0}-{x_1})}$$ Plug these into the third equation which becomes $$\frac{1}{36} ({x_0} (9-36 {x_1})+9 {x_1}-4)=0$$ from which $x_0$ can be eliminated $$x_0=\frac{9 {x_1}-4}{9 (4 {x_1}-1)}$$ Now, the fourth equation becomes $$\frac{36 {x_1} (7 {x_1}-5)+17}{1296 (4 {x_1}-1)}=0$$ that is to say $$252 {x_1}^2-180 {x_1}+17=0$$ for which the roots are $$\frac{1}{42} \left(15 \pm\sqrt{106}\right)$$ I am sure that you can take from here and finally get $$\left\{a_0=\frac{1}{2}-\frac{9}{4 \sqrt{106}},a_1=\frac{1}{2}+\frac{9}{4 \sqrt{106}},x_0=\frac{1}{42} \left(15+\sqrt{106}\right),x_1=\frac{1}{42} \left(15-\sqrt{106}\right)\right\}$$ $$\left\{a_0=\frac{1}{2}+\frac{9}{4 \sqrt{106}},a_1=\frac{1}{2}-\frac{9}{4 \sqrt{106}},x_0=\frac{1}{42} \left(15-\sqrt{106}\right),x_1=\frac{1}{42} \left(15+\sqrt{106}\right)\right\}$$
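The claimed values can be verified numerically against all four moment equations; a short sketch:

```python
from math import sqrt

r = sqrt(106)
a0, a1 = 0.5 - 9 / (4 * r), 0.5 + 9 / (4 * r)
x0, x1 = (15 + r) / 42, (15 - r) / 42

# the right-hand sides 1, 1/4, 1/9, 1/16 of the four equations
for k, m in enumerate([1, 1 / 4, 1 / 9, 1 / 16]):
    print(k, a0 * x0**k + a1 * x1**k - m)  # residuals vanish up to rounding
```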
Boundary regularity for the p-Laplace equation
I'm not sure what your assumptions are, but I assume $p$ is equal to the dimension of the domain. In the special case $f \equiv 0$ the continuity of $u$ can be easily shown. Look up Peter Lindqvist's Notes on the $p$-Laplace equation, section 3.2. The proof can be adjusted to work in the case of a sufficiently regular nonzero right-hand side $f$. As for the continuity up to the boundary, this obviously depends on the boundary data. If one solves the equation with non-continuous boundary data (chosen in the trace space for $W^{1,n}$), then the solution is not in $C(\overline{\Omega})$.
Using the Banach fixed point Theorem
There are some typos in your derivation. Specifically, the correct form of $(2)$ is: $$u=(I+\lambda_0 A)^{-1} \left[\frac{\lambda_0}\lambda f\right]+(I+\lambda_0 A)^{-1}\left[(1-\frac{\lambda_0}\lambda)u\right].$$ Your equation is of the form $$u=v + Bu$$ for a constant $v$ and a linear map $B$. If $B$ is a contraction, then $u\mapsto v+Bu$ is a contraction and you are done. So why is $B=(1-\frac{\lambda_0}\lambda)(I+\lambda_0A)^{-1}$ a contraction? Here note that $\|(I+\lambda_0 A)^{-1}\|≤1$; use the positivity of $A$ if you wish. This gives you $$\|B\|≤|1-\frac{\lambda_0}\lambda|<1,$$ and now you are finished.
Understanding if $\sin x = t$ then $\cos x dx = dt$
$$t=\sin\ x \implies t+dt=\sin(x+ dx)=\sin x\cos dx+\cos x \sin dx$$ $$dx \approx 0\implies \sin dx \approx dx \ \text{ and } \ \ \cos dx \approx 1$$ Here the property ($ \sin x \approx x \text{ when } x\approx 0$) was applied, hence $\sin dx \approx dx$ $$t+dt=\sin x\cos dx+\cos x \sin dx = \sin x + \cos x \cdot dx$$ $$\text{$\sin x =t$ so $t+dt= t+ \cos x \cdot dx\implies dt = \cos x\cdot dx$}$$ This is the logic behind it, but to avoid trouble in the future, simply multiply both sides by $dx$: $$\cfrac{df}{dx} = g(x)\implies df=g(x)\cdot dx$$
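One can let a computer algebra system reproduce this first-order expansion; a small sketch with sympy:

```python
import sympy as sp

x, dx = sp.symbols('x dx')
# expand sin(x + dx) to first order in the small increment dx
print(sp.series(sp.sin(x + dx), dx, 0, 2))  # sin(x) + dx*cos(x) + O(dx**2)
```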
Double Sum of symmetric expression
In general, with OP's notation: $$\sum_{i=1}^n \sum_{j=1}^n f(a_i,a_j)=\sum_{i<j=1}^n f(a_i,a_j)+\sum_{i=1}^n f(a_i,a_i)+\sum_{i>j=1}^n f(a_i,a_j) \tag{1}$$

> $\sum_{i<j=1}^n (a_i - a_j)^2 = \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n (a_i - a_j)^2$ And this seems to work.

In this case $\,f(a_i,a_j)=(a_i-a_j)^2\,$, therefore $\,f(a_i,a_j)=f(a_j,a_i)\,$ and $\,f(a_i,a_i)=0\,$, so the above follows directly from $(1)\,$.

> Nevertheless for the symmetric case this doesn't work.

In this case $\,f(a_i,a_j)=(a_i+a_j)^2\,$, therefore $\,f(a_i,a_j)=f(a_j,a_i)\,$ and $\,f(a_i,a_i)=4a_i^2\,$, so: $$\sum_{i=1}^n \sum_{j=1}^n (a_i+a_j)^2 = 2 \sum_{i<j=1}^n (a_i+a_j)^2 + 4 \sum_{i=1}^n a_i^2 \\ \quad \iff \quad \sum_{i<j=1}^n (a_i+a_j)^2 = \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n (a_i+a_j)^2 - 2 \sum_{i=1}^n a_i^2$$
Poisson Process - calculation of time
For a Poisson process, the interarrival time is exponentially distributed with expected value $\frac{1}{\mu}$. However, since only a fraction $0.3$ of arrivals are women, the mean waiting time for a woman to join is $\frac{1}{0.3\mu}$. Let $T_1$ be the time for the first woman to arrive, $T_2$ the second, etc. Then we want the expected value of $T_1+T_2+T_3$, which is $\frac{3}{0.3\mu}=\frac{1}{0.1\,\mu}=\frac{10}{\mu}$; with $\mu=5$ arrivals per month this gives $\frac{10}{5}=2$ months.
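A quick Monte Carlo check of this thinning argument (assuming, as the final numbers suggest, an overall arrival rate of $\mu = 5$ per month):

```python
import random

mu, p_woman, trials = 5.0, 0.3, 100_000
total = 0.0
for _ in range(trials):
    t, women = 0.0, 0
    while women < 3:
        t += random.expovariate(mu)    # interarrival times ~ Exp(mu)
        if random.random() < p_woman:  # each arrival is a woman with prob. 0.3
            women += 1
    total += t

print(total / trials)  # close to 3 / (0.3 * mu) = 2.0 months
```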
Is there a simple proof that $2\cdot\frac{n}{3}$ is not an integer when $\frac{n}{3}$ is not an integer?
Simply notice that $\rm\displaystyle\ \frac{n}3\ +\ \frac{2\:n}3\ =\ n\in \mathbb Z\ \ $ therefore $\rm\displaystyle\ \frac{n}3\in\mathbb Z\ \iff\ \frac{2\:n}3\in \mathbb Z$ This is true precisely because $\rm\:\mathbb Z\:$ is an additive subgroup of $\rm\:\mathbb Q\:,\:$ i.e. a subset closed under subtraction. For if $\rm\:S\:$ is a subgroup of a group and $\rm\ a+b\ = s \in S\ $ then $\rm\ a = s-b \in S\iff\ b = s-a\in S\:,\ $ so your property holds. Conversely if your property holds and $\rm\:a,b\in S\ $ then since $\rm\ (a-b)+b = a \in S\ $ the property implies that $\rm\: a-b\in S\:,\: $ so $\rm\:S\:$ is closed under subtraction, so $\rm\:S\:$ is a subgroup (or empty). See also this complementary form of the subgroup property from my prior post. THEOREM $\ $ A nonempty subset $\rm\:S\:$ of abelian group $\rm\:G\:$ comprises a subgroup $\rm\iff\ S\ + \ \bar S\ =\ \bar S\ $ where $\rm\: \bar S\:$ is the complement of $\rm\:S\:$ in $\rm\:G$. Instances of this are ubiquitous in concrete number systems, e.g.

- algebraic $\times$ nonalgebraic $=$ nonalgebraic, if nonzero (e.g. transcendental numbers)
- rational $\times$ irrational $=$ irrational, if nonzero
- real $\times$ nonreal $=$ nonreal, if nonzero
- even $+$ odd $=$ odd (additive example)
- integer $+$ noninteger $=$ noninteger (additive example)
The level surface of the function $f(x,y,z) = (x^2+y^2)^{-1/2}$ are...
The answer is c because the function only depends on $x$ and $y$. You can move freely on the $z$ direction without changing the value of $f$.
Is $(-3n,3n)$ a subcover of $(-n,n)$?
Any set in $C'$ is in $C$: simply let $A \in C'$, take $n$ such that $A=(-3n,3n)$. Now take $m=3n$ to see that $(-m,m) \in C$. So $C' \subset C$.
Let p be a prime number. Prove that the equation $2/p=1/x+1/y$ has one solution with integers $0<x<y$.
As in a comment above, we must have $p \neq 2,$ and $2x>p,\ 2y>p.$ Multiply through by $2pxy$ to get $4xy-2px-2py =0,$ and $$ (2x-p)(2y-p) = p^2 $$ Since $p$ is prime and $0 < 2x-p < 2y-p$ (using $y > x$), the only factorization available is $1 \cdot p^2 = p^2,$ so $2x-p=1$ and $2y-p=p^2,$ giving $x = \frac{p+1}{2}$ and $y = \frac{p^2 + p}{2}$
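A quick check of these formulas over the first few odd primes, using exact rational arithmetic:

```python
from fractions import Fraction

for p in [3, 5, 7, 11, 13, 17]:
    x = (p + 1) // 2
    y = p * (p + 1) // 2
    assert Fraction(2, p) == Fraction(1, x) + Fraction(1, y)
    print(p, x, y)  # e.g. p=3 gives x=2, y=6: 2/3 = 1/2 + 1/6
```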
Are there further primes of the form $\varphi(n)^{\varphi(\varphi(n))}+1$?
This is mostly just a summary of what I have found that is too big for a comment. Since $\varphi(\varphi(n))$ must be a power of $2$, $\varphi(n)=2^m p_1 p_2 p_3...p_m$, where each of the $p_i$ is a distinct Fermat prime. Thus we have $$\varphi(n)^{\varphi(\varphi(n))}+1=(2^mp_1p_2p_3...p_m)^{2^r}+1.\tag{1}$$ We also have that $$n=2^uq_1q_2...q_sp_1^{e_1}p_2^{e_2}...p_m^{e_m}$$ where the $q_i$ are primes of the form $2^dp_i+1$, and the $e_i$ are each $0,1$ or $2$. If a $q_i$ is present in the factorization, then the corresponding $e_i$ is at most $1$. If we want to answer your question, it will probably be easiest to work with the right side of $(1)$.
How to prove that $u_{m}=v_{m}$ for these sequences
Lemma 1 : $ u_k = 1 + \sum a_i + \sum a_i a_{i-2} + \sum a_i a_{i-2} a_{i-4} + \ldots $ This is obvious by induction on $k$. Note that the $u_k$ are independent of $m$. $_\square$ Note: 1 is the "sum of the empty product", i.e. $1 = \sum {\text{( product of 0 terms )} } $. This is the algebraic explanation behind why the formula works, though you can verify it by expanding too. Lemma 2 : Fix $m$. The sequence of $v_k$ obtained from $a_1, a_2, \ldots a_{m-1}$ is equal to the sequence of $u^* _k$ obtained from setting $a^*_1 = a_{m-1}, a^* _2 = a_{m-2}, \ldots , a^*_{m-1} = a_1$. This is obvious from the recurrence definition. Apply your favorite symmetry argument. $_\square$ Lemma 3: Fix $m$. $u_m = v_m$. Use Lemma 2 to calculate $v_m$ via Lemma 1. Compare it to $u_m$ via Lemma 1. $_\square$
Preimages of coprime ideals
Yes. If $S=R/K$, then $I=I'/K$ and $J=J'/K$. If $I', J'$ are not coprime, then there is a prime ideal $P$ such that $I'+J'\subseteq P$, and thus $I+J\subseteq P/K$. (Note that $P\supseteq K$ since $I',J'\supseteq K$.)
Can the closure of a countable set be characterized sequentially?
No, we cannot say this. A somewhat practical example of this coming up is if we take the space $(L_{\infty}(\mathbb R))_1^{*}$ to be the space of linear functions $F:L_{\infty}\rightarrow\mathbb R$ such that $|F(f)|\leq \|f\|_{\infty}$ under the weak-* topology. This space is compact by the Banach-Alaoglu theorem. Define a sequence of functions $F_n(f)=\int_{n}^{n+1} f$. Note that this sequence has no limit points; in particular, if $F_{s_n}$ is a subsequence, define $$f_s(x)=\begin{cases}1 & \text{if }s_{2n}\leq x <s_{2n}+1 \text{ for some }n\\ 0 & \text{otherwise}.\end{cases}$$ Then, $F_{s_n}(f_s)$ oscillates between $1$ and $0$, hence fails to converge in the weak-* topology. From this, we see that the sequential closure of $\{F_n\}$ is itself. However, the set of $\{F_n\}$ is not closed, because if it were, it would be compact. This is clearly not the case, as it is a countable discrete space.
In a principal ideal domain, gcd(a,b) always exists and can always be expressed as $xa +yb$ with some $x, y \in R$
Consider the ideal $I := \{ xa + yb \mid x, y \in R \}$ (why is this an ideal?). Since it is principal, it is generated by some element, say $r \in I$. Show that $r$ is a gcd of $a$ and $b$.
Weyl group of this root system is $S_n$?
The root system is a subset of $\mathbb R^n$. The simple reflections $s_{\alpha_i}$, where $\alpha_i=e_i- e_{i+1}$, generate the Weyl group. They correspond to the transpositions $(i, i+1)$, $i=1,2, \cdots, n-1$, in $S_n$, which generate $S_n$.
Let L be any non-empty language over an alphabet Σ. Show that $L^2$ ⊆ $L^3$ if and only if λ ∈ L.
As Orest mentioned in the comments, $$\forall w_3 \in L^3 \land \forall w_2 \in L^2, \quad n_0(w_3) > n_0(w_2)$$ is false. To give a correct proof of this direction, instead take $w_1 \in L$ to be a word of minimal length and show the concatenation $w_1w_1$ is in $L^2$ but not in $L^3$. For the other direction, assume $\lambda \in L$ and concatenate any word in $L^2$ with $\lambda$ to find a word in $L^3.$
Number Game: 31 - Winning Strategy?
Most likely it was row $3$: you probably started by taking a $3$, and then he took nothing but $4$’s, to which you responded with $3$’s to hit the ‘magic numbers’. Unfortunately, after four moves apiece the total was at $28$, and you’d used all the $3$’s. Your strategy would have been perfect had there been at least five of each number instead of just four. You can make the same idea work for you as first player if you start by taking $5$. If the second player takes another $5$ to bring the total to $10$, you take $2$. If he then takes another $5$ to make $17$, you take $2$ again. In order to make $24$, he has to take the last $5$. You then take another $2$; there are no $5$’s left, so he can’t hit $31$. No matter whether he takes $1,2,3$, or $4$, you can reach $31$; the closest call is when he takes a $3$, but there’s still one $2$ left, so you still win. If the second player doesn’t take $5$ on his first turn, there are two cases. Case 1: He takes $1,2,3$, or $4$, making a total less than $10$. In this case you bring the total to $10$ on your next turn by taking $4,3,2$, or $1$, respectively. At this point there are still at least three of every number left, and only three more numbers to hit ($17,24,31$), so he can’t force you to use up what you need. Case 2: He takes a $6$, making a total of $11$. Go ahead and take a $6$, making $17$. The worst he can do to you is take a $1$, forcing you to take the third $6$, but that brings the total to $24$, and even if he takes $1$ again, there’s still a $6$ that you can take to reach $31$. In other words, if he doesn’t respond to your opening $5$ by taking a $5$ himself, you just play the ‘magic number’ strategy.
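To double-check this analysis, one can solve the game exhaustively. A memoized sketch, assuming the rules implied above: four copies of each of $1,\dots,6$, reaching exactly $31$ wins, and a player with no legal move loses:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(total, counts):
    """True if the player to move wins; counts[d-1] copies of digit d remain."""
    for d in range(1, 7):
        if counts[d - 1] == 0 or total + d > 31:
            continue
        if total + d == 31:
            return True                      # this move hits 31 outright
        rest = list(counts)
        rest[d - 1] -= 1
        if not wins(total + d, tuple(rest)):
            return True                      # opponent loses from there
    return False                             # no winning (or no legal) move

print(wins(0, (4,) * 6))                     # the first player's prospects
print(wins(5, (4, 4, 4, 4, 3, 4)))           # after an opening 5; False means 5 wins
```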
Zero-Diagonal Matrix and Positive Definiteness?
The answer is negative and actually even more is true: the matrix cannot be positive definite if there is at least one diagonal element which is equal to $0$. By definition, an $n\times n$ symmetric real matrix $A$ is positive definite if $$ x^TAx>0 $$ for all non-zero $x\in\mathbb R^n$. Suppose that $a_{ii}=0$ for some $i=1,\ldots,n$, where $a_{ii}$ denotes the $i$-th element on the diagonal of $A$. Suppose that all entries of $x\in\mathbb R^n$ are equal to $0$ except the $i$-th entry which is not equal to $0$. Such an $x$ is hence a non-zero vector since there is one entry which is not equal to $0$. We have that $$ x^TAx=a_{ii}x_i^2=0 $$ for all non-zero $x_i\in\mathbb R$ since $a_{ii}=0$. It follows that the matrix $A$ is not positive definite. Of course a matrix with a diagonal entry equal to $0$ can still be positive semi-definite. I hope this helps.
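A concrete numeric illustration of the argument (the particular matrix is just a hypothetical example):

```python
import numpy as np

# symmetric 3x3 matrix with the diagonal entry a_22 = 0
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 3.0]])
x = np.array([0.0, 1.0, 0.0])  # nonzero only in the coordinate of the zero entry

print(x @ A @ x)               # 0.0, so A cannot be positive definite
print(np.linalg.eigvalsh(A))   # indeed the smallest eigenvalue is negative here
```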
How would you integrate a function containing a definite integral (without calculating the integral)?
If we first denote the integral function of the inner function as $$ G(x) = \int_0^x e^{-u^2} \ du$$ Now by definition of the integral function we have $$ G'(x) = e^{-x^2}$$ Now if we consider your function we have $$ f(x) = (G(x))^2$$ The derivative is found with the chain rule $$ f'(x) = 2G'(x)G(x)$$ So $$ f'(x) = 2e^{-x^2} \int_0^x e^{-u^2} \ du$$
Hint needed for Galois theory for a quartic without using the discriminant
If you call $$\pm\sqrt{\frac{3\pm\sqrt{-11}}2}$$ intimidating, then it is just as well that the Galois group isn't $A_4$ or $S_4$. Then any formula for the roots would be much more complex. The splitting field is $K=\Bbb Q(\sqrt{-11},\alpha,\beta)$ where $\alpha$ and $\beta$ are square roots of $\frac12(3+i\sqrt{11})$ and $\frac12(3-i\sqrt{11})$. Then $|K:\Bbb Q(\sqrt{-11})|$ is a factor of $4$, so the Galois group cannot be $A_4$ or $S_4$. Let's suppose that $\beta=\bar\alpha$. Then $\alpha\beta$ is the positive square root of $\frac14(3+i\sqrt{11})(3-i\sqrt{11})=5$. Therefore $\alpha \beta=\sqrt5\in K$. As $\sqrt5$ is not a square in $\Bbb Q(\sqrt{-11})$ then $|K:\Bbb Q(\sqrt{-11})|=4$ from elementary Kummer theory. Thus $|K:\Bbb Q|=8$ and so the Galois group is $D_4$.
How many $2\times2$ matrices are invertible mod $p$?
The answer is $(p^2-1)(p^2-p)$: there are $p^2-1$ ways to choose the first column such that it is non-zero, and then the second column can be any of the $p^2$ vectors except the $p$ multiples of the first column, giving $p^2-p$ choices.
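A brute-force count for small primes confirms the formula; a quick sketch:

```python
from itertools import product

def count_invertible(p):
    """Count 2x2 matrices over Z/pZ with determinant nonzero mod p."""
    return sum(1 for a, b, c, d in product(range(p), repeat=4)
               if (a * d - b * c) % p != 0)

for p in [2, 3, 5, 7]:
    print(p, count_invertible(p), (p**2 - 1) * (p**2 - p))  # counts agree
```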
Conditions for existence of a bounded operator on Hilbert space
This is not possible under your general assumptions. Consider the Hilbert space $H=l^2$. Set $g_n=e_n$, $e_n$ the unit sequences. Set $f_1=e_1$ and $f_n=e_1 + n^{-1} e_n$ for $n\ge2$. Assume that there is such a linear mapping with $Af_n=g_n$ for all $n$. Then $A$ is unbounded: Take $n\ge2$ and compute $$ Ae_n = A(nf_n-ne_1) = n (e_n-e_1). $$ If one assumes that $(f_n)$ is an orthonormal sequence and $\sum_{n=1}^\infty \|g_n\|^2<\infty$ then it works. Set $$ Ax:=\sum_{n=1}^\infty \langle x,f_n\rangle g_n, $$ which has the desired mapping properties. Moreover, $$ \|Ax\|\le \sum_{n=1}^\infty |\langle x,f_n\rangle|\cdot \| g_n\| \le \left(\sum_{n=1}^\infty |\langle x,f_n\rangle|^2\right)^{1/2}\left(\sum_{n=1}^\infty \| g_n\|^2 \right)^{1/2} \\ \le \|x\| \left(\sum_{n=1}^\infty \| g_n\|^2 \right)^{1/2}. $$
How to list graphs systematically?
Let us take $n = 6$ and let $G$ be a cubic graph of order $6.$ What you can do is split your analysis depending on some invariants of $G.$ Following is an example of this. If $G$ is bipartite then (since the parts of a bipartition of a regular graph have to have the same size) you're left with only one choice, $G = K_{3,3}.$ If $G$ is not bipartite then it has to contain an odd cycle and clearly its girth has to be $3.$ Now if $T$ is a triangle of $G$ then every vertex of $T$ has precisely one neighbor not in $T$, and after adding these neighbors you have just one way to complete the obtained graph to a cubic one. I hope this is useful to you. In case you wish me to write the same analysis for $n = 8$, let me know.
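For $n = 6$ this analysis can be confirmed by machine as well; a brute-force sketch (assuming networkx is available):

```python
import itertools
import networkx as nx

nodes = range(6)
all_edges = list(itertools.combinations(nodes, 2))  # 15 possible edges
reps = []  # representatives up to isomorphism

for mask in itertools.product([0, 1], repeat=len(all_edges)):
    G = nx.Graph([e for e, keep in zip(all_edges, mask) if keep])
    G.add_nodes_from(nodes)
    if all(deg == 3 for _, deg in G.degree()):
        if not any(nx.is_isomorphic(G, H) for H in reps):
            reps.append(G)

print(len(reps))  # 2: K_{3,3} and the triangular prism
```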
Show that the stabilizer $G_Y$ is closed under multiplication.
You need to show that for all $g_1,g_2 \in G_Y$, $g_1g_2 \in G_Y$ as well. And to do so, you need to show that for all $y \in Y$, $g_1g_2y = y$. Since $g_1 \in G_Y$ you have $g_1y = y$ for all $y \in Y$. Similarly for $g_2$. Hence $$(g_1g_2)y = g_1(g_2y) = g_1y = y$$
Gambler's ruin difference equation says he'll never reach goal if $p=\frac 1 2$
What you are missing is that if $p=\frac{1}{2}$, then the two roots of the characteristic polynomial coincide. The general solution becomes $$s_c(n)=c_11^n + c_2n1^n=c_1+nc_2$$
KKT optimisation - condition of inequality constraint being zero
The condition $\mu \cdot g(x_1,x_2) = 0$ is a complementary slackness condition. It says that either the dual multiplier $\mu$ is 0, or there is no slack in the $g(x_1,x_2) \le 0$ constraint. The dual values give the change in the optimal objective function value if the right-hand side of the constraint changes. So the intuition behind the complementary slackness condition is that if there is slack in the constraint ($g(x_1,x_2) \lneqq 0$), then changing the RHS would have no effect on the optimal objective function value, hence $\mu=0$. And, if $\mu \ne 0$, then a change in RHS would lead to a change in the objective function, so the constraint must be tight ($g(x_1,x_2) = 0$).
Bernoulli First Order ODE
You switched one sign too many in $$ -\frac12v'+\frac2xv=\frac5{x^2} $$ Then $$ \left(\frac{v}{x^4}\right)'=\frac{v'}{x^4}-\frac{4v}{x^5}=-\frac{10}{x^6} \implies \frac{v}{x^4}=\frac2{x^5}+C $$ etc.
How do I show that $x^n$ converges on $|x|=1$ only if $x=1$?
First of all, when we talk about radius of convergence we talk about series, not sequences; you should say that the radius of convergence of this series is one: $$\sum_{n=0}^{\infty}x^n$$ Now you want to study the case $|x|=1$, namely $x=1$ and $x=-1$. I'll assume that you know the divergence test, which states that if $\sum_{n=0}^{\infty}a_n$ converges, we must have $\lim_{n\to \infty}a_n=0$. In the case $x=1$ you will have $a_n=1$, which does not go to zero as $n$ goes to infinity; similarly when $x=-1$. This implies that the series diverges when $|x|=1$ and hence the interval of convergence is $(-1,1)$. Edit: if we are in the complex case and $|x|=1$, then $a_n=x^n$ does not converge to zero (otherwise we would have $1=|x|^n=|x^n|\to 0$, a contradiction), which implies that the series $\sum_{n=0}^{\infty}a_n$ does not converge.
What does a superscript zero mean in set notation?
$A^o$ is the interior of $A$. It's written with a small $o$. The overline for closure and the superscript $o$ for interior make a somewhat clumsy two-dimensional notation. For example, $\overline {{\overline {A^o}}^o} = \overline {A^o}$. Some mathematicians therefore write linearly, with a superscript $-$ for closure: $A^{o-o-} = A^{o-}$.
Proving double inequalities using well-ordering property
My guess is that this is what you want to prove: For $n \in \mathbb{N}^+$, let $R(n) =\{x \mid \frac1{n+1} < x \le \frac1{n}\} $. Then, $0 < x \le 1 \implies \exists n \in \mathbb{N}^+ $ with $x \in R(n) $. Here is my proof (as usual, off the top of my head with editing as I go). Since $0 < x \le 1$, $1 \le \frac1{x}$. Let $y = \frac1{x}$. By the axiom of Archimedes, there is an integer $m$ such that $m \gt y$. Let $G(y) =\{m \mid m \gt y\} $. We have shown that $G(y)$ is non-empty. Also, all integers $j \le 1$ are not in $G(y)$. By the well-ordering principle, $G(y)$ has a smallest member. Call this $k$; note that $k \ge 2$. For this, we must have $k \gt y$ and $k-1 \le y$. Therefore $\frac1{k} < \frac1{y} \le \frac1{k-1}$, or $\frac1{k} < x \le \frac1{k-1}$, so that $x \in R(k-1)$. And we are done.
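The proof is constructive: the $n$ it produces is just $\lfloor 1/x \rfloor$. A small sketch (with the caveat that floating point can misbehave at the interval endpoints):

```python
import math

def interval_index(x):
    """Return n with 1/(n+1) < x <= 1/n, for 0 < x <= 1."""
    n = math.floor(1 / x)
    assert 1 / (n + 1) < x <= 1 / n
    return n

for x in [1.0, 0.5, 0.3, 0.07]:
    print(x, interval_index(x))  # n = 1, 2, 3, 14
```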
Checking continuity by looking at whether the image set is an interval or not
Let $x_0$ be arbitrarily chosen from $A$. Without loss of generality assume $A=[a,b]$ (check that) and that $f$ is increasing. By monotonicity of $f$, $$\sup_{a\leq x<x_0}f(x)=f(x_0^-)\leq f(x_0)\leq f(x_0^+)=\inf_{x_0<x\leq b}f(x).$$ Suppose $f(x_0^+)>f(x_0)$, and pick any $c$ with $f(x_0)<c<f(x_0^+)$. For $x\leq x_0$ monotonicity gives $f(x)\leq f(x_0)<c$, while for $x>x_0$ we have $f(x)\geq f(x_0^+)>c$. Hence $c$ is not attained by $f$, although $f(A)$ contains the values $f(x_0)<c$ and $f(b)\geq f(x_0^+)>c$. This contradicts the assumption that $f(A)$ is an interval. Thus $f(x_0^+)=f(x_0)$; similarly prove $f(x_0^-)=f(x_0)$.
Is my entropy calculation correct? Clustering entropy example
One step is missing: you must do the overall computation $$H = \frac{N_1}{N}H_1 + \frac{N_2}{N}H_2 + \frac{N_3}{N}H_3$$ where $H_1,H_2,H_3$ are the entropies of the three clusters, $N_1=6$, $N_2=6$ and $N_3=5$ are the cluster sizes, and $N=17$ is the total number of objects. Then $$H = \tfrac{6}{17}(0.650022421648) + \tfrac{6}{17}(1.25162916739) + \tfrac{5}{17}(0.970950594455) = 0.956744853323706$$
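In code, the weighted average looks like this (the per-cluster entropies are taken as given, as above):

```python
cluster_sizes = [6, 6, 5]
cluster_entropies = [0.650022421648, 1.25162916739, 0.970950594455]
N = sum(cluster_sizes)  # 17 objects in total

H = sum(n / N * h for n, h in zip(cluster_sizes, cluster_entropies))
print(H)  # 0.9567448533...
```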
Let $X$ be an infinite Hausdorff space and $A \subset X$. Suppose $x$ is a limit point of $A$.
Suppose $A \subseteq X$ and $x$ is a limit point of $A$, and $X$ is Hausdorff. This means that for every open neighbourhood $U$ of $x$, $U$ intersects $A \setminus \{x\}$. Suppose now that $U \cap A$ were finite for some open neighbourhood $U$ of $x$. Fact: in a Hausdorff space all finite sets are closed. (In fact the latter is equivalent to $X$ being $T_1$, and Hausdorff implies $T_1$.) So $V:= X\setminus ((U \cap A) \setminus \{x\})$ is open and contains $x$. Now $(V \cap U) \cap A \subseteq \{x\}$: $y \in (V \cap U) \cap A$ implies $y \in U \cap A$, but as $y \in V$ too, $y=x$. This shows that the open neighbourhood $U \cap V$ of $x$ witnesses that $x$ is not a limit point of $A$, contrary to assumption. So $U \cap A$ can never be finite, for $U$ an open neighbourhood of $x$. We only use Hausdorff mildly; $T_1$ is all we need.
The character of a newform
See Chapter 9 of William Stein's book, in particular Definition 9.3.
Convergent sequence in metric space
First note that if we find a metric on $A=\mathbb{N}\cup\{\infty\}$ such that every increasing sequence of naturals converges to $\infty$, we'd win, because given that: $\implies$) call $x$ the limit and define $\forall n\in\mathbb{N},\phi(n)=x_n$ and $\phi(\infty)=x$; this $\phi$ is continuous as it is sequentially continuous ($f:A\rightarrow X$ is sequentially continuous if $\forall (n_k)_k\subset A,(n_k)_k$ convergent $\implies (f(n_k))_k$ convergent and $\mathrm{lim}f(n_k)=f(\mathrm{lim}n_k)$) $\impliedby$) As $\phi$ is continuous, take $x_{\infty}=\phi(\infty)$; as $n\rightarrow\infty$, then $\phi(n)=x_n\rightarrow x_{\infty}=\phi(\infty)$ Then we simply define: $\forall n,m\in\mathbb{N},\ d(n,m)=\left|\frac{1}{n+1}-\frac{1}{m+1}\right|,\ d(n,\infty)=d(\infty,n)=\frac{1}{n+1},\ d(\infty,\infty)=0$
Condition to have an almost everywhere property
What you would like to do is simply take $$f=1_{g>1},$$ but this function is usually not continuous. However, it is measurable and bounded and hence integrable over $[0,T]$, so you can approximate it with a sequence $(f_n)$ of continuous functions, since $C([0,T])$ is dense in $L^1([0,T])$ with respect to the $L^1$-norm. You can then check that $$ \bar f_n(x):= \min\{\max\{f_n(x),0\},1\}, \quad x\in[0,T],$$ defines a non-negative continuous function for all $n \in \Bbb N$ and the sequence $(\bar f_n)$ also converges to $f$ with respect to the $L^1$-norm. Without loss of generality, we can assume that it also converges almost everywhere (since any $L^1$-convergent sequence has an almost everywhere convergent subsequence). Now $$ |\bar f_n(1-g)|\le 1+|g| $$ and dominated convergence yields $$0\ge\int_0^T 1_{g(t)>1}(1-g(t))dt=\int_0^T f(t)(1-g(t))dt=\lim \int_0^T \bar f_n(t)(1-g(t))dt \ge 0.$$ Hence, the non-positive function $1_{g(t)>1}(1-g(t))$ must vanish almost everywhere. This is only possible if $g\le 1$ almost everywhere. PS: Even if for whatever reason we could not take a subsequence, this would not be a problem, since $L^1$ convergence implies convergence in measure by Markov's Inequality, and the Dominated Convergence Theorem actually works for convergence in measure. Apart from simplicity, the only reason that it is more commonly stated with convergence almost everywhere is that this version holds for any measure, while convergence in measure is only meaningful for $\sigma$-finite measures. But now I really start to digress...
What can be said about the following result about Hilbert spaces?
You can apply the lemma $n$ times to prove that the span of $\{1,x,x^2,...,x^{n-1}\}$ is closed, but it doesn't hold at infinity. It would be like saying that the union of infinitely many closed sets is closed. In fact $P$ is the union, with $n$ running from $0$ to infinity, of the spans of $\{1,...,x^{n}\}$.
What is a good rule to understand how inequality shading works?
Rules that always work are:

- If the equation of the line is in the form $x = ay + b,$ then the region $x \leq ay + b$ will be on the left of the line and the region $x \geq ay + b$ will be on the right of the line.
- If the equation of the line is in the form $y = ax + b,$ then the region $y \leq ax + b$ will be below the line and the region $y \geq ax + b$ will be above the line.

These rules work because the lesser value of $x$ is always on the left of the greater value of $x$, and the lesser value of $y$ is always below the greater value of $y.$ But these rules do not always work when you mix $x$ and $y$ together on the same side of an inequality. In the case of $x - y \leq 5,$ the left hand side can become less if you decrease $x$ (moving to the left), but it can also become less if you increase $y$ (moving upward). To disambiguate this, you can rewrite the inequality with just one variable on each side; for example, $x \leq y + 5$ is completely equivalent to $x - y \leq 5,$ but because it is in the form $x \leq ay+b$ we see that the shaded region must be on the left side of the line, which it is. Another way to figure out which side to shade is: pick a point that is not on the line, plug its $x$ and $y$ coordinates into the inequality, and check whether the inequality is true with these values plugged in. If so, the point you chose is on the side where the shading should be; if not, the shading should be on the opposite side. (The sketch below mechanizes this point test.)
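A tiny sketch of the point test for inequalities of the form $ax + by \le c$:

```python
def satisfies(a, b, c, x, y):
    """Does the point (x, y) satisfy a*x + b*y <= c?"""
    return a * x + b * y <= c

# x - y <= 5, tested at the origin (which is not on the line):
print(satisfies(1, -1, 5, 0, 0))  # True, so shade the side containing (0, 0)
```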
How to prove that $\angle{APC}+\angle{AYC}=180$?
I will try to give two solutions. The first solution is a direct synthetic solution; for this I need a simple lemma, and then I use inversion. The second solution is a prosaic computational solution, added in order to show what can be done analytically in short, straightforward computations in such cases; although such a solution breaks the beauty, this way to go may be useful in mathematical competitions. Finally, observe that one can also give an "inverse solution" in the sense that we construct points $X'$, $Z'$ instead of $X$, $Z$ as in the comment of Mick, showing afterwards that $X=X'$, $Z=Z'$. (From the alternative construction, the properties used are better suited to be combined with the given situation.) Let's go. Direct solution: We need the following... Lemma: Let $\Delta ABC$ be a triangle, let $H$ be its orthocenter, let $O$ be the center of the circumscribed circle, and we denote this circle by $(O)$ or $(ABC)$. On the line $BH$ we consider the following points: $G$, its intersection with $AC$; $D^*$, its intersection with the circumcircle $(O)$; and $P^*$, the reflection of $B$ with respect to $G$, i.e. $BG=GP^*$. On the line $CH$ we consider $E$, its intersection with $AB$; on the line $AH$ we consider $F$, its intersection with $BC$. Then the line $AP^*$, the line $CD^*$, and the parallel to the line $BHGD^*P^*$ through $E$ intersect in a point, $X^*$. In other words, the reflection w.r.t. the line $AGC$ maps the points $B,E,H$ respectively to $P^*,X^*,D^*$. In particular, the trapezoid $BEX^*P^*$ is isosceles, thus cyclic. The four points $B,F,D^*,X^*$ are on a circle. Proof of the lemma: We have $$ \widehat{HCA}=\frac \pi2-\hat A= \widehat{HBA}= \widehat{D^*BA}= \widehat{D^*CA}\ . $$ The points $B,H$ are thus reflected to $D^*,P^*$, and since $A,C$ are fixed by the reflection, the intersection $E=AB\cap CH$ is reflected to $X^*=AP^*\cap CD^*$. Then $EX^*$ is perpendicular to the axis of reflection, as the line $BHGD^*P^*$ is, too, so $EX^*\|BP^*$. To see $BFD^*X^*$ cyclic, we test if $C$ has the same power with respect to $B,F$ and with respect to $X^*,D^*$, and indeed $$ CF\cdot CB=CH\cdot CE=CD^*\cdot CX^*\ . $$ (The first equality holds since $B,E,H,F$ lie on the circle with diameter $BH$; the second since the reflection fixes $C$ and maps $H,E$ to $D^*,X^*$.) $\square$ Now we place the OP in the foreground, and restate and show, recalling and enriching the data of the problem: Let $\Delta ABC$ be a triangle with orthocenter $H$. The heights of this triangle are (denoted in a slightly unusual manner) $AF$, $BG$, $CE$, with $E,F,G$ on the sides of the triangle. Let $D$ be the point of intersection $EF\cap BH$. Let $P$ be the mid point of the segment $BH$. Let $X$ be the intersection $AP\cap CD$. Let $Z$ be the intersection $CP\cap AD$. We also introduce the inversion denoted by a star, $W\to W^*$, centered in $B$ and with power $$ BE\cdot BA=BH\cdot BG=BF\cdot BC\ . $$ Then we have the following: (1) $A=E^*$, $E=A^*$; $G=H^*$, $H=G^*$; $F=C^*$, $C=F^*$. (2) $D^*$ (the image of $D$ by this inversion) is the intersection of the line $BHG$ with the circumcircle $(ABC)$. (3) $P^*$ is the symmetric point of $B$ w.r.t. $G$. (Because of $2BP=BH$, and $H^*=G$.) (4) $X^*$ is the intersection point of $BX$, $AP^*$, $CD^*$, and the circle $(BXP^*)$. Moreover, $CX^*\perp AX^*$. (5) The quadrilaterals $(BXDF)=(\infty X^*D^*C)^*$, $(BPXE)=(\infty P^*X^*A)^*$ are cyclic. (6) The angles in $X,Z$ in the quadrilateral $XPZD$ are right angles. (7) Let $Y$ be the symmetric point of $D$ w.r.t. the point reflection in $G$; then $APCY$ is cyclic. Proof: The above points were collected for an easy structure of the proof. Most of them are clear when stated; we will only touch the hard points.
Note that the inversion is well defined, since the power of $B$ w.r.t. the circles $GHEA$, $GHFC$ gives the equality $BE\cdot BA=BH\cdot BG=BF\cdot BC$. Now (1), (2), (3) are immediate. (4) follows from the previously isolated lemma. Indeed, $X$ is characterized by being on the lines $\infty AP$ and $\infty CD$, so after applying the inversion $X^*$ is characterized by being on $(\infty AP)^*=(BEP^*)$ and $(\infty CD)^*=(BFD^*)$. So this is exactly the point denoted by coincidence $X^*$ in the lemma, and it also lies on $BX$, $AP^*$, $CD^*$. (5) is clear, the inversion of a line is a circle through the center of inversion. (6) uses (4), $$ \begin{aligned} \widehat{PXD} &= \widehat{BXD} - \widehat{BXP} \\ &= \widehat{BD^*X^*} - \widehat{BP^*X^*} \\ & = \widehat{D^*X^*P^*} = \widehat{CX^*P^*} =90^\circ\ . \end{aligned} $$ (7) Since $D$ and $Y$ lie on the line $BG\perp AC$ and $GY=GD$, the point $Y$ is the mirror image of $D$ in the line $AC$; this reflection fixes $A$ and $C$, so $\widehat{AYC}=\widehat{ADC}=\widehat{XDZ}$. Together with (6), which gives right angles at $X$ and $Z$ in the quadrilateral $XPZD$, we get $$ \widehat{APC}+ \widehat{AYC} = \widehat{XPZ}+ \widehat{XDZ} = 180^\circ\ . $$ $\square$ Note: The picture comes with further smog, showing some bonus relations; one can show "the other" related properties to obtain "the other" (corresponding) solution. The essence was isolated in the lemma. Solution by computation: We show that $GA\cdot GC=GP\cdot GY$, where $GY=GD$ since $Y$ is symmetric to $D$ w.r.t. $G$. (A posteriori, we can affirm as a matter of terminology that $G$ has this same power in the cyclic quadrilateral $(APCY)$. Showing the relation is a proof of the cyclicity.) Let us denote by $A,B,C$ the (measures of the) angles in $\Delta ABC$, by $a,b,c$ the (lengths of the) corresponding opposite sides. Let $R$ be the radius of the circumcircle. Then we have: $$ \begin{aligned} GA &= c\cos A=2R\; \sin C\cos A\ ,\\ GC &= a\cos C=2R\; \sin A\cos C\ ,\\ GA\cdot GC &= 4R^2 \; \sin A\cos A\; \sin C\cos C\ ,\\[2mm] GP &= \frac 12(GB+GH)\\ &=\frac 12(c\sin A+AG\underbrace{\tan\widehat{HAG}}_{\cot C})\\ &=\frac 12(2R\; \sin A\sin C+2R\; \sin C\cos A\cdot\frac{\cos C}{\sin C})\\ &=R\cos(A-C)\ ,\\ GD & = GB - DB\\ &=c\sin A -\frac{BE\cdot BA}{BD^*}\\ &=2R\;\sin A\sin C -\frac{a\cos B\cdot c}{2\cdot R\sin\frac {\widehat{BOD^*}}2}\\ &=2R\;\sin A\sin C -2R\;\frac{\sin A\cos B\cdot \sin C}{\sin \widehat{BAD^*}}\\ %&=2R\;\sin A\sin C\Big(\ 1 -\frac{\cos B}{\sin (A+\widehat{CAD^*})}\ \Big)\\ &=2R\;\sin A\sin C\Big(\ 1 -\frac{\cos B}{\cos(A-C)}\ \Big)\ . \end{aligned} $$ In the last step we have used $\widehat{BAD^*} =A+\widehat{CAD^*} =A+\widehat{CBD^*} =A+(90^\circ-C)$, so the sine of this angle is the cosine of $(A-C)$. This implies $$ \begin{aligned} GP\cdot GD &= 2R^2\; \sin A\sin C\; \Big(\ \cos(A-C) -\cos B\ \Big)\\ &= 2R^2\; \sin A\sin C \cdot (-2)\sin\frac{A-C+B}2\sin \frac{A-C-B}2\\ &= -4R^2\; \sin A\sin C \;\sin\frac{180^\circ-2C}2\sin \frac{2A-180^\circ}2\\ &= 4R^2\; \sin A\sin C \;\cos A\cos C\\ &= GA\cdot GC\ . \end{aligned} $$ $\square$
How to obtain the common numbers between these 2 sequences?
The statement that if $n=6k \pm 1$ then $c$ exists is false, and that it would be divisible by $144$ is even more false.

Setup

First realize that the sum of the first few odd numbers gives perfect squares. $$ \begin{align} 1^2 &= 1 \\ 2^2 &= 1+3 \\ 3^2 &= 1+3+5 \\ 4^2 &= 1+3+5+7 \\ n^2 &= \sum_{k=0}^{n-1} (2k+1) \end{align} $$ It's easy to find lots of ways to prove this fact, I leave this to you. Next, notice that whatever the common sequence number $c$ is, we have that $$ n = (a^2 - c) - (b^2 - c) = a^2 - b^2 $$ And since we reach the common number by subtracting odd numbers starting from $1$, this means that the expressions $a^2-c$ and $b^2-c$ must be perfect squares. Since they are perfect squares, we can write $$ x^2 = a^2 -c \\ y^2 = b^2 -c \\ n = x^2-y^2 = (x+y)(x-y) $$ Notice that since we choose $n$ to be an odd number, only an odd number times another odd number can give us an odd number. Therefore, $x+y$ and $x-y$ must both be odd, which guarantees $x$ and $y$ both are integers.

Working Backwards

The trick to understanding this problem is actually working our way backwards. Once we have $a$, $b$, $x$, and $y$, we know that $$ a^2-x^2 = b^2-y^2 = c $$ Pick any two odd numbers to multiply together, for instance: $$ \begin{align} 5 \times 3 = 15 &= 8^2 - 7^2 \\ &= (4+1)(4-1) \end{align} $$ And therefore, $8^2 - 4^2 = 7^2 - 1^2 = c = 48$. This is why $c$ only exists if $n$ is not prime. We already showed that as long as $n$ can be factored, it can be factored into odd factors, and we can then run this process to find $c$. (Technically, we still need to prove the converse, that there is no other way to obtain a common value $c$, but all the statements made so far can easily be tweaked to be biconditional.) This also means that some $n$ values will give us multiple values for $c$, as long as the number can be represented as products in more than one way. See: $$ \begin{align} 45 = 23^2 - 22^2 &= 15 \times 3 = (9+6)(9-6) \\ &= 9 \times 5 = (7+2)(7-2) \end{align} $$ You end up with both: $$ 23^2-9^2 = 22^2-6^2 = 448 \\ 23^2-7^2 = 22^2-2^2 = 480 $$ Confirm with the following sequences: $$ 529, 528, 525, 520, 513, 504, 493, \color{red}{480}, 465, \color{blue}{448}, 429, 408, 385, 360, 333, 304, 273, 240, 205, 168, 129, 88, 45 \\ 484, 483, \color{red}{480}, 475, 468, 459, \color{blue}{448}, 435, 420, 403, 384, 363, 340, 315, 288, 259, 228, 195, 160, 123, 84, 43 $$

Conclusion

The common value $c$ only exists if $n$ is not prime. In fact, if $n$ has $k$ factors, then there are $\frac{k}{2}-1$ values for $c$ (rounded up if $n$ is a perfect square). We have not made any statements about the divisibility of $c$, but it is trivial to at least show it must be even.
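The $n=45$ example is easy to replay in code; here is a short sketch generating both descending sequences as sets and intersecting them:

```python
def seq(a):
    """Start at a^2 and subtract 1, 3, 5, ...: the values are a^2 - k^2."""
    return {a * a - k * k for k in range(a)}

common = sorted(seq(23) & seq(22), reverse=True)
print(common)  # [480, 448], matching the two factorizations of 45 above
```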
Rewriting the domain of my integral using a function for calculating the center of mass
We want to find the following integral over $\Omega$ : $$ y_\mathrm{s} = \frac{3}{2}\int \limits_\Omega y \, \mathrm{d} \mu (x,y) \, .$$ We can also integrate over $B$ instead if we find a suitable function $f$ : $$ y_\mathrm{s} = \frac{3}{2} \int \limits_B y f(x,y) \, \mathrm{d} \mu (x,y) \, .$$ We need to make sure, however, that we only integrate over the part of $B$ on which $y \leq x^2$ holds. We cannot do this by choosing $f(x,y) = x^2$ , since this would lead to a totally different integral which has nothing to do with the centre of mass. Instead we can use the Heaviside step function $H$ and let $f(x,y) = H(x^2 - y)$ . Then $f(x,y) = 1$ holds for $(x,y) \in \Omega$ (except possibly for a set of measure zero) and $f$ vanishes on $B \setminus \Omega$ , so the two integrals are indeed equal. This is the correct way to incorporate the condition $y \leq x^2$ into the integration over $B$. We then end up with the integral $$ y_\mathrm{s} = \frac{3}{2} \int \limits_{-1}^1 \int \limits_0^1 y H(x^2-y) \, \mathrm{d} y \, \mathrm{d} x = \frac{3}{2} \int \limits_{-1}^1 \int \limits_0^{x^2} y \, \mathrm{d} y \, \mathrm{d} x \, . $$ Note that the final result is $y_\mathrm{s} = \frac{3}{10}$ and not $y_\mathrm{s} = \frac{1}{10}$ though.
What is so special about negative numbers $m$, $\mathbb{Z}[\sqrt{m}]$?
One way to think about this is in terms of the reduction theory of positive definite binary quadratic forms over $\mathbb{Z}$: see for instance Cox's book Primes of the form $x^2 + ny^2$, or, for a short introduction, these notes. The key result here is that for each positive definite primitive binary quadratic form there is a unique Minkowski-reduced form equivalent to it. If you are looking at forms $x^2 + ab y^2$ with $1 \leq a \leq b$, $\operatorname{gcd}(a,b) = 1$, then you find that both $q_1(x,y) = x^2 + ab y^2$ and $q_2(x,y) = a x^2 + by^2$ are Minkowski reduced, so the class number of the quadratic order of discriminant $-4ab$ must be at least $2$. This is not the whole story, because you also have to deal with discriminants of the form $D \equiv 1 \pmod 4$, but a similar analysis can be done here. Anyway, this gives you some feel for what is different between imaginary and real quadratic fields: the associated binary quadratic forms behave very differently. Added: After seeing Weaam's (correct) answer, I looked back at the question and saw that it is asking something easier than I had thought: the question asks why $\mathbb{Z}[\sqrt{-m}]$ is not a PID for certain integers $m > 0$. But as Weaam points out, it is easy to show that $\mathbb{Z}[\sqrt{-m}]$ is not a UFD for any $m > 2$: this also occurs in my lecture notes for the course I taught (more or less) based on Cox's book: see Corollary 6 of these notes (the first handout in the course). What I say above is also a complete solution to this question -- well, at least as long as $m$ is not a square; in that case one could just argue that the ring is not integrally closed -- but is more complicated than is necessary. What I was indicating is that for squarefree composite $m > 0$, the class number of the full ring of integers of the imaginary quadratic field $\mathbb{Q}(\sqrt{-m})$ is greater than $1$. This does seem to be most easily handled by elementary reduction theory / genus theory, so far as I know.
Is $\int \frac{1}{x} dx = \int \frac{dx}{x}$?
Yes, $\int \frac{dx}{x}$ is simply shorthand for $\int \frac1x \, dx$; they mean precisely the same thing.
show $X=[0,1]^\omega$ has no metric which defines the box topology
The point $0=(0,0,0,\ldots,0,\ldots)$ (or in fact any point in $[0,1]^\omega$) does not have a countable local base, so the space cannot be metrisable (because then the balls $B(x,\frac{1}{n})$ would form a countable local base at $x$). Suppose that $U_n$ is such a local base at $0$. For each $n$ pick a sequence $(a^{(n)}_k)_k \in [0,1]^\omega$ such that $0 \in \prod_k [0, a^{(n)}_k) \subseteq U_n$; this can be done as such sets form a base for the box topology, and $[0,e)$, $e>0$, is a basic open set for $0$ in $[0,1]$. Then define $O =\prod_n [0, \frac{a^{(n)}_n}{2})$. This is a neighbourhood of $0$, but no $U_n$ is contained in $O$, as witnessed by the $n$-th component of $O$ and $U_n$.
Series convergence radius proximity
Probably there is some confusion here. Consider the series $\sum_{n=1}^\infty (x-4)^n$; it clearly converges for $x=4$, since every term is zero there. I will use the ratio test; you can also use the root test. \begin{equation} \left|\frac{a_{n+1}}{a_n}\right|=\left|\frac{(x-4)^{n+1}}{(x-4)^n}\right|=|x-4|=L \text{ (say)}. \end{equation} The series converges if $L<1$, diverges if $L>1$, and the test is inconclusive if $L=1$. If $L<1$, then $|x-4|<1 \Rightarrow 3<x<5$. If $L=1$, then $|x-4|=1 \Rightarrow x=3,5$. At $x=3$ the series is surely divergent, and likewise at $x=5$, because the terms do not tend to zero (the necessary condition for convergence of a series fails). \begin{equation} \text{radius of convergence}= \frac{\text{upper value}-\text{lower value}}{2}=\frac{5-3}{2}=1. \end{equation}
Inseparable, irreducible polynomials
Let $p \in \mathbb N$ be prime, $q \in \mathbb N$ coprime to $p$, and let $F = \mathbb F_p(t)$ be the field of rational functions of $t$ with coefficients in $\mathbb F_p$. Consider $$ f(x) = x^{pq} - t. $$ EDIT: By Eisenstein's criterion, $x^{pq} - t$ is irreducible over $\mathbb F_p[t]$ (because $t$ is a prime in there). By Gauss' Lemma, it is also irreducible over the field of fractions, which is $\mathbb F_p(t)$. Thanks to Sam L. for this part of my argument. Since the derivative of $f$ is zero in $\mathbb F_p(t)[x]$, the polynomial is inseparable. But the polynomial $x^q - 1$ is separable in $\mathbb F_p(t)[x]$, because its derivative is $qx^{q-1}$, which has no common roots with $x^q - 1$, so the roots of $x^q - 1$ are distinct. Now let $\sqrt[pq]t$ be a root of $x^{pq} - t$ and $w$ a $q^{th}$ root of unity. Then the distinct roots of $f$ are $w^i (\sqrt[pq]t)$, with $i$ ranging from $0$ to $q-1$, each with multiplicity $p$. Hope that helps,
generating function for k-combinations
Notice the following. When you write $$(1+x)^n=(1+x)(1+x)\cdots (1+x),$$ and you unfold the product, every multiplicand will contribute either a $1$ or an $x.$ Call the multiplicands $(1+x)^n=p_1\cdots p_n,$ where $p_i=1+x.$ If you decide to pick the $x$ in the $i$-th multiplicand, it is as if you increase by one the size of the set you are building (the exponent of $x$ records that size); this corresponds to adding the element $i$ to the set. In that way, the coefficient of $x^k$ in the expansion counts the number of ways to create a set of size $k$ out of $n$ elements, which is $\binom{n}{k}.$ For example, if $n=3$ you have $$(1+x)(1+\color{red}{x})(1+\color{blue}{x})=\underbrace{1\cdot 1\cdot 1}_{\emptyset}+\underbrace{x\cdot 1\cdot 1}_{\{1\}}+\underbrace{x\cdot \color{red}{x}\cdot 1}_{\{1,2\}}+\cdots +\underbrace{1\cdot 1\cdot \color{blue}{x}}_{\{3\}}+\cdots +\underbrace{x\cdot \color{red}{x}\cdot \color{blue}{x}}_{\{1,2,3\}}.$$ Notice that in general, unfolding the product gives rise to $2^n$ summands (that is the beauty of the binomial theorem: it collects them into the $n+1$ possible sizes).
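If you want to see this counting argument run, here is a small Python sketch (mine, not part of the original answer): it enumerates all $2^n$ ways of picking $1$ or $x$ from the factors and checks that the number of patterns of total degree $k$ is $\binom{n}{k}$.

```python
from itertools import product
from math import comb

n = 3
counts = [0] * (n + 1)
for choices in product([0, 1], repeat=n):   # 0 = take the 1, 1 = take the x
    counts[sum(choices)] += 1               # total degree = size of the chosen set
assert counts == [comb(n, k) for k in range(n + 1)]
print(counts)  # [1, 3, 3, 1] -- the coefficients of (1+x)^3
```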
A property of fields
I assume $n=|G|$. Use the following lemma: Lemma 1. Let $M$ be a monoid and $f_i: M\to K^\times$ a family of different monoid morphisms. Then the $(f_i)$ form a linearly independent family in $\mathcal{F}(M,K)$ (the $K$-vector space of functions $M\to K$). This lemma is very instructive to prove so I'll leave it to you (it's fundamental in Galois theory); it's essentially a generalization of the fact that eigenvectors for different eigenvalues are automatically independent. How does that help us? It helps us in the following way: the $\sigma : K^\times\to K^\times, \sigma \in G$ form a family of different monoid morphisms, therefore they are linearly independent in $\mathcal{F}(K^\times, K)$. We then use another lemma: Lemma 2. Let $K$ be a field, $X$ a set and $f_1,...,f_n : X\to K$ be a family of linearly independent functions. Then there are $x_1,...,x_n \in X$ such that the matrix $(f_i(x_j))_{i,j}$ is invertible. This lemma is a bit more complicated, but not that much; here's a proof. We prove the lemma by induction on $n$. For $n=1$, it's clear: if $f:X\to K$ only took $0$ as a value, then it wouldn't be linearly independent, so there's $x\in X$ such that $f(x)\neq 0$. Going from $n$ to $n+1$: assume the result holds for any family of $n$ linearly independent functions, and let $f_1,...,f_{n+1} : X\to K$ be linearly independent. In particular, $f_1,...,f_n$ are linearly independent, so we find $x_1,...,x_n$ as in the lemma. Now consider $F:x\mapsto \det (f_i(x_j))_{1\leq i,j \leq n+1}$, where we set $x_{n+1} = x$. This is a map $X\to K$. Let's prove that it takes a nonzero value: if it does, we'll take $x_{n+1}$ to be one of the points where it does, and we'll be through. As it turns out, $F(x) = \displaystyle\sum_{i=1}^{n+1} M_i f_i(x)$ where $M_i$ is a well-chosen minor (expand the determinant along the last row). Note that $M_{n+1} = \det (f_i(x_j))_{1\leq i,j\leq n}$ up to a sign, so $M_{n+1}\neq 0$. Therefore $F$ can't be the zero function, as a nonzero linear combination of the linearly independent $f_i$: it takes a nonzero value; and we are done with the induction. Where does that leave us? Simply apply Lemma 1 to the $(\sigma)_{\sigma\in G}$ to get that they are linearly independent in $\mathcal{F}(K^\times, K)$, and then apply Lemma 2 to get $x_\tau, \tau \in G$ such that the matrix $M=(\sigma(x_\tau))_{\sigma, \tau \in G}$ is invertible. Then find $Y=(y_\tau)_{ \tau \in G}$ such that $MY = (\delta_{\sigma, id_K})_{\sigma\in G}$ (such a $Y$ exists because $M$ is invertible). Writing out what this means: for all $\sigma \in G$, $\displaystyle\sum_{\tau \in G}\sigma(x_\tau)y_\tau = \delta_{\sigma, id_K}$. Now order $G$ to get your $x_i$'s, $y_i$'s. Bonus: a second proof of Lemma 2 not involving determinants: let $V$ be the (finite-dimensional) sub-vector space of $\mathcal{F}(X,K)$ generated by the $f_i$. For $x\in X$ consider $ev_x : V\to K, f\mapsto f(x)$. Then by definition of the zero map, $\displaystyle\bigcap_{x\in X}\ker (ev_x) = \{0\}$. Now argue with dimensions that some $x_1,...,x_n$ must exist so that $\displaystyle\bigcap_{i=1}^n\ker (ev_{x_i}) = \{0\}$: those are your $x_i$'s.
Find values for x such that A is not invertible.
I obtain $$\det A=-4 x^2 + 6 x + 2=0\implies x=\frac34\pm \frac{\sqrt {17}}4$$ To find the values for which $\det A=0$, you could simplify the matrix as follows $$\begin{bmatrix} 3 & 1 & 7-x \\ 3 & 2-x & 4 \\ 4 & 2 & 8 \\ \end{bmatrix} \to\begin{bmatrix} 3 & 1 & 1-x \\ 3 & 2-x & -2 \\ 2 & 1 & 0 \\ \end{bmatrix} $$ (subtract twice the first column from the third, then halve the last row; these operations change the determinant only by a nonzero factor, so they do not affect where it vanishes).
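As a quick symbolic check (a sketch using SymPy, not part of the original answer):

```python
from sympy import Matrix, symbols, solve

x = symbols('x')
A = Matrix([[3, 1, 7 - x],
            [3, 2 - x, 4],
            [4, 2, 8]])

det = A.det().expand()
print(det)             # -4*x**2 + 6*x + 2
print(solve(det, x))   # [3/4 - sqrt(17)/4, 3/4 + sqrt(17)/4]
```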
Ellipse and its eccentricity in terms of angle Φ
If you have a set of points given by a system of equations, then every linear combination of those equations is also satisfied by that set of points. Applying this idea to the problem at hand, we know that the auxiliary circle $x^2+y^2=a^2$ and the tangent line $\frac xa\cos\phi+\frac yb\sin\phi=1$ both pass through $Q$ and $R$. We look for a linear combination of these equations that also passes through the origin. This will occur when the constant term of the combined equation is zero. We can find suitable coefficients by inspection: the first equation minus $a^2$ times the second will eliminate the constant term. The resulting equation describes an ellipse, which isn’t particularly useful, so a bit of clever mathematical trickery is brought to bear: squaring both sides of the tangent line equation makes it a degenerate conic consisting of the tangent line and its reflection in the origin. By symmetry, the intersections of this additional line with the auxiliary circle are the reflections of $Q$ and $R$ and so lie on the same lines through the origin, so adding this extra line doesn’t really change the conditions of the problem. Subtracting $a^2$ times this squared equation from that of the auxiliary circle produces a degenerate conic: a pair of lines (perhaps coincident) that intersect at the origin. In fact they are precisely the lines through $Q$ and $R$. This is a pretty slick way to generate the required lines through the origin without explicitly computing $Q$ and $R$. From here, one could split the conic to get the individual equations of the lines and then apply the constraint that they must be perpendicular, but I suspect that the book takes a simpler approach. These lines are the asymptotes of a family of hyperbolas. Those asymptotes are perpendicular when the hyperbolas are rectangular, which in turn occurs when the sum of the coefficients of the squared terms (i.e., the trace of the matrix of the associated quadratic form) in the equation is zero.
Evaluate the difference quotient for $f(x)=x^3$
I assume you mean $f(x) = x^3$, in which case $$f(a) = a^3$$ and $$f(a + h) = (a + h)^3$$. To simplify the quotient, just expand and cancel until you can't.
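Spelling the expansion and cancellation out: $$\frac{f(a+h)-f(a)}{h} = \frac{(a+h)^3 - a^3}{h} = \frac{3a^2 h + 3a h^2 + h^3}{h} = 3a^2 + 3ah + h^2 .$$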
Should errors need to follow any pattern, or they can be random?
There are lots of different types and sources of error; some look random, some do not. For example, truncation error (e.g. replacing a convergent infinite series by the sum of a finite number of terms) tends to have a pattern: when convergence is rapid, the error may be dominated by the first omitted term. On the other hand, roundoff error tends to look random.
How to prove the following function is not Lipschitz continuous?
It is not even uniformly continuous on the real line: take $x_{n}=\sqrt{2n\pi+\pi/2}$ and $y_{n}=\sqrt{2n\pi}$; then $x_{n}-y_{n}\rightarrow 0$ but $\sin(x_{n}^{2})-\sin(y_{n}^{2})=1$ does not converge to $0$ as $n\rightarrow\infty$. If it were Lipschitz continuous, then it would be uniformly continuous.
Prove that if a finite solvable group is simple, it is a cyclic group of prime order.
Hints: $[G, G] \triangleleft G$, and $[G, G] \neq G$ for a solvable group $G \neq 1$ (why?).
Dual Of Integer Network Formulation
There are three errors: (1) Because $y_{ij}$ is associated to the constraint whose right-hand side is 0 ($x_i - x_j \leq 0$), it should not appear in the objective. (2) The coefficients $w_i$ seem to be missing in the dual. (3) You currently do not have a dual variable associated to the constraint $x_i \leq 1$. This seems to be the correct dual to me: \begin{align} \min & \sum_{i \in N} z_{i} \\[4pt] \text{s.t. } & z_i + \sum_{j : (i,j) \in A} y_{ij} - \sum_{j : (j,i)\in A} y_{ji} \geq w_i, \forall i \in N \\[10pt] & y_{ij} \geq 0 , \forall (i,j) \in A \\ & z_i \geq 0, \forall i \in N \end{align}
Why adherent points of Natural Numbers are only natural numbers
If $x\in\mathbb R\setminus\mathbb N$, let $r$ be the distance from $x$ to the closest natural number. Then $(x-r,x+r)\cap\mathbb N=\emptyset$ and $(x-r,x+r)$ is a neighborhood of $x$.
Why is the general linear group a smooth manifold?
It is the inverse image of $\mathbb{R}\setminus{\{0\}}$ (which is open) under the determinant map $\det : M(n,\mathbb{R})\to\mathbb{R}$, which is continuous because it is a polynomial in the matrix entries. The inverse image of an open set under a continuous map is open, and an open subset of $M(n,\mathbb{R})\cong\mathbb{R}^{n^2}$ is automatically a smooth manifold.
PDF of $\frac{X}{1+X^2}$ in terms of the PDF of $X$
You might be interested to know that, if $X$ has PDF $f_X$ and if $Y=h(X)$ for some function $h$ regular enough then, the PDF $f_Y$ of $Y$ is given by $$f_Y(y)=\sum_{x:h(x)=y}\frac1{|h'(x)|}f_X(x)$$ In your case, $$h(x)=\frac{x}{1+x^2}$$ hence $f_Y(y)=0$ for $|y|\geqslant\frac12$. For every $0<|y|<\frac12$, $h(x)=y$ if and only if $x=\xi_\pm(y)$, where $$\xi_\pm(y)=\frac{1\pm\sqrt{1-4y^2}}{2y}$$ Furthermore, $$h'(x)=\frac{1-x^2}{(1+x^2)^2}$$ hence $$h'(\xi_\pm(y))=\mp\frac{y\sqrt{1-4y^2}}{\xi_\pm(y)}$$ which yields, for every $0<|y|<\frac12$, $$f_Y(y)=\sum_{\pm}\frac{\xi_\pm(y)}{y\sqrt{1-4y^2}}f_X(\xi_\pm(y))$$ The actual computational details in the specific case that interests you may be slightly involved but (I hope it is apparent that) the method itself is straightforward, even when, as here, $h$ is not injective.
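As a numerical sanity check of the final formula, here is a sketch (mine, not from the original answer) that takes $X$ to be standard normal (an arbitrary choice, since the answer leaves $f_X$ general) and compares the formula against a Monte Carlo histogram of $Y = X/(1+X^2)$:

```python
import numpy as np
from scipy.stats import norm

def f_Y(y):
    # density of Y = X/(1+X^2) for X ~ N(0,1), summing over the two preimages
    s = np.sqrt(1.0 - 4.0 * y**2)
    total = 0.0
    for sign in (+1.0, -1.0):
        xi = (1.0 + sign * s) / (2.0 * y)   # xi_plus and xi_minus
        total += abs(xi / (y * s)) * norm.pdf(xi)
    return total

rng = np.random.default_rng(0)
x = rng.standard_normal(10**6)
y = x / (1.0 + x**2)
hist, edges = np.histogram(y, bins=50, range=(-0.45, 0.45), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
print(max(abs(hist[i] - f_Y(m)) for i, m in enumerate(mids)))  # small, up to MC/binning error
```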
Beginner probability question about the phrase "order doesn't matter"
We can say that the order doesn't affect the probability. Of course, the father choosing first and then the son would be a different experiment from the opposite order, but "father" and "son" are just labels and do not affect the probability of the event. If they choose simultaneously, then there are $10\times 10 = 100$ different outcomes, of which $10$ have them choosing the same dish. So the probability of choosing the same dish is $\frac{10}{100} = \frac{1}{10}$, which is equivalent to a probability of $\frac{9}{10}$ of choosing different dishes.
Prove that $y^2z^2 - y^2 -z^2$ is not a perfect square for any $y,z \in \mathbf{N}$
Looking at $y^2z^2-y^2-z^2$ modulo $4$, we see that both $y$ and $z$ must be even if that expression is a perfect square. Setting $y=2y_1, z=2z_1$, we get $$ y^2z^2-y^2-z^2=16y_1^2z_1^2-4y_1^2-4z_1^2\\ =4(4y_1^2z_1^2-y_1^2-z_1^2) $$ If this is a perfect square, then $4y_1^2z_1^2-y_1^2-z_1^2$ must also be a perfect square. And again, looking at it modulo $4$, we see that $y_1$ and $z_1$ must both be even. This descent continues indefinitely, which is impossible for positive integers. So there is no solution (unless $0$ is allowed).
If $X_n \rightarrow X$ in probability then $X_n \Rightarrow X$?
Here is a direct argument: first note that $|\mathrm e^{\mathrm ix}-\mathrm e^{\mathrm iy}|\leqslant\min\{2,|x-y|\}$ for all real numbers $x$ and $y$, hence $$|\varphi_n(t)-\varphi(t)|\leqslant2P[|X_n-X|\geqslant\varepsilon]+|t|\varepsilon, $$ for every positive $\varepsilon$, where $\varphi_n(t)=E[\mathrm e^{\mathrm itX_n}]$ and $\varphi(t)=E[\mathrm e^{\mathrm itX}]$. Now, assume that $X_n\to X$ in probability. Then, for every fixed positive $\varepsilon$, $P[|X_n-X|\geqslant\varepsilon]\to0$, hence $\limsup\limits_{n\to\infty}|\varphi_n(t)-\varphi(t)|\leqslant|t|\varepsilon$. This is valid for every positive $\varepsilon$ hence $\varphi_n(t)\to\varphi(t)$. This convergence holds for every $t$ hence $X_n\to X$ in distribution.
Regarding understanding of the following SVD code in matlab
Wikipedia is your friend: Applications of the SVD. Look at the pseudo-inverse and rank sections.
How many distinct roots does $ax^5+bx^3+cx+d$ have
Let $f(x)$ be this polynomial. Then $f'(x) = 5ax^4 + 3bx^2+c>0$ for every $x$, because all of the coefficients are $>0$ and $x^2,x^4 \ge 0$. So, $f$ is strictly increasing everywhere. With $\lim_{x \to -\infty}f(x) = -\infty$ and $\lim_{x \to +\infty} f(x) = +\infty$, we conclude that $f(x) = 0$ has a unique real solution. This of course doesn't mean that $f(x) = 0$ has no complex solutions (it has four, in two conjugate pairs).
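A quick numerical illustration (my own sketch; the coefficient values are arbitrary positive choices):

```python
import numpy as np

a, b, c, d = 1.0, 2.0, 3.0, 4.0       # any positive coefficients
roots = np.roots([a, 0, b, 0, c, d])  # a x^5 + b x^3 + c x + d
real_roots = roots[np.abs(roots.imag) < 1e-9].real
print(len(real_roots), len(roots) - len(real_roots))  # 1 real root, 4 non-real
```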
Slow variation of counting functions
I will prove what I said earlier in a comment, with a slight correction: $$A = \{n : (2m)! < n < (2m+1)!,\ m \in \mathbb{N}\} $$ is a counterexample to the proposed conjecture. To see this, consider the sequence $x_n = \lfloor {\frac {(2n)!}{\lambda}}\rfloor$. For this sequence, $A(\lambda x_n) \sim (2n-2)!\,(2n-2)$, because most of the elements of $A$ less than $\lambda x_n \sim (2n)!$ are the elements between $(2n-2)!$ and $(2n-1)!$. We also find that $A(x_n) \sim (1-\lambda) x_n= \frac {(2n)!}{\lambda}-(2n)!$, because eventually $2n \gg \frac {1}{\lambda}$, so all numbers between $(2n)!$ and $\frac {(2n)!}{\lambda}$ will be less than $(2n+1)!$, and thus be elements of $A$. Putting these two formulas together, we get $$\frac {A(\lambda x_n)}{A(x_n)} \sim \frac {(2n-2)!\,(2n-2)}{\frac {(2n)!}{\lambda}-(2n)!}= \frac {\lambda}{1-\lambda}\cdot\frac {2n-2}{(2n-1)(2n)}$$ which tends to $0$, thus the limit inferior is $0$.
Help with finite fields
The finite field $GF(2^8)$ is better thought of as a collection of polynomials of degree at most $7$, with coefficients modulo $2$, taken modulo some (irreducible) $8$th degree polynomial $P$. Let's see what all of this means. First, each member of $GF(2^8)$ is of the form $$\sum_{i=0}^7 a_i x^i,$$ where the $a_i$ are coefficients "modulo $2$", i.e. bits. When you add two of the $a_i$s, you calculate the answer modulo $2$, so it's the same as XORing. Adding two polynomials is easy: $$\sum_{i=0}^7 a_i x^i + \sum_{i=0}^7 b_i x^i = \sum_{i=0}^7 (a_i+b_i) x^i.$$ So addition corresponds to XOR. Multiplication is more involved. For AES the polynomial in question is $$P(x) = x^8 + x^4 + x^3 + x + 1.$$ Written in bits, it is $(100011011)_2 = (283)_{10}$. The point is that $x^8 = x^4 + x^3 + x + 1$ (since everything's mod $2$), so in order to multiply a polynomial by $x$, you do the following: Remember the MSB (that's the coefficient of $x^7$). Shift the number left once (replacing $x^i$ with $x^{i+1}$). If the old MSB was $1$, XOR $(11011)_2$ (since $x^8 = x^4 + x^3 + x + 1$). Using this primitive, you can multiply two polynomials $A = \sum_{i=0}^7 a_i x^i$ and $B = \sum_{i=0}^7 b_i x^i$ as follows: Initialize the result $C = 0$. Add to $C$ the value of $a_0 B$, that is, if the LSB of $A$ is $1$, XOR $B$ to $C$. Add to $C$ the value of $a_1 x B$, that is, if the second least bit of $A$ is $1$, XOR $xB$ to $C$; to calculate $xB$, use the method above. And so on, until you get to $a_7$. Return $C$. In practice, you implement it as a loop: Initialize $C = 0$. If $LSB(A)=1$, $C = C + B$ (i.e. XOR $B$ to $C$). Set $B = xB$ and $A = A/2$ (i.e. shift $A$ right once). Repeat the previous two steps $7$ more times. In a real implementation, this multiplication table is stored in some condensed form. At the very least, you store the table of multiplication by $x$. The other extreme is storing all $2^{16}$ possible products. In between, you can store the product of any $A$ by any $2$-bit $B$ (size $2^{10}$ bytes), any $3$-bit $B$ (size $2^{11}$ bytes) or any $4$-bit $B$ (size $2^{12}$ bytes). It all depends on how much memory you can spare, and on the trade-off between memory access (i.e. cache sizes) and ALU performance.
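Here is a minimal Python sketch of the loop just described (the function name `gf256_mul` is mine); the pair $\mathtt{0x53}\cdot\mathtt{0xCA}=\mathtt{0x01}$ is a well-known multiplicative-inverse pair in the AES field and serves as a sanity check.

```python
def gf256_mul(a, b):
    """Multiply a and b in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1."""
    c = 0
    for _ in range(8):
        if a & 1:            # if LSB(A) = 1, add B to the result (XOR)
            c ^= b
        msb = b & 0x80       # remember the coefficient of x^7 in B
        b = (b << 1) & 0xFF  # B = xB: shift left once ...
        if msb:
            b ^= 0x1B        # ... and reduce, using x^8 = x^4 + x^3 + x + 1
        a >>= 1              # move on to the next bit of A
    return c

assert gf256_mul(0x53, 0xCA) == 0x01  # known inverse pair in the AES field
```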
Strange thing about Weak Maximum/Minimum Principle?
You seem to infer, from continuity of $u$, that every number between $\inf_{\partial\Omega}u$ and $\sup_{\partial\Omega}u$ is in $u(\partial\Omega)$. That won't work without some connectedness assumption.
$ f \left( f ( x ) ^ 2 + y \right) = x ^2 + f ( y ) $ converts into Cauchy.
For the sake of completeness, I will also repeat the arguments for the parts you've already achieved. Let $ f : \mathbb R \to \mathbb R $ satisfy $$ f \left( f ( x ) ^ 2 + y \right) = x ^ 2 + f ( y ) \tag 0 \label 0 $$ for all $ x , y \in \mathbb R $. Setting $ y = 0 $ in \eqref{0} we get $$ f \left( f ( x ) ^ 2 \right) = x ^ 2 + f ( 0 ) \text , \tag 1 \label 1 $$ which implies that if $ f ( x ) = f ( y ) $ then $ x ^ 2 = y ^ 2 $. In particular, if we put $ x = 0 $ in \eqref{1}, we get $ f \left( f ( 0 ) ^ 2 \right) = f ( 0 ) $, and thus $ f ( 0 ) ^ 4 = 0 ^ 2 $, or equivalently $ f ( 0 ) = 0 $. Using this together with \eqref{1} and substituting $ f ( x ) ^ 2 $ for $ x $ in \eqref{0}, we have $$ f \left( x ^ 4 + y \right) = f ( x ) ^ 4 + f ( y ) \text . \tag 2 \label 2 $$ In particular, setting $ y = 0 $ in \eqref{2}, we get $$ f \left( x ^ 4 \right) = f ( x ) ^ 4 \text . \tag 3 \label 3 $$ We can use \eqref{3} to rewrite the right-hand side of \eqref{2}, which will show that if $ x \ge 0 $, then $ f ( x + y ) = f ( x ) + f ( y ) $. Since for any $ x $ we have $ | x | \ge 0 $ and $ | x | + x \ge 0 $, we get $$ f ( x + y ) = f \big( ( | x | + x ) + ( y - | x | ) \big) = f ( | x | + x ) + f ( y - | x | ) \\ = f ( | x | ) + f ( x ) + f ( y - | x | ) = \big( f ( | x | ) + f ( y - | x | ) \big) + f ( x ) \text , $$ and hence $$ f ( x + y ) = f ( x ) + f ( y ) \text . \tag 4 \label 4 $$ Now, note that \eqref{3} implies $ f ( x ) \ge 0 $ when $ x \ge 0 $. This means that the function is increasing: if $ x \le y $, we can substitute $ y - x $ for $ y $ in \eqref{4} to get $ f ( y ) = f ( x ) + f ( y - x ) \ge f ( x ) $. This, together with \eqref{4}, implies that letting $ a = f ( 1 ) $, we have $ f ( x ) = a x $ for all $ x \in \mathbb R $. Plugging this into \eqref{1}, you'll find that the only solution is the identity function.
fourier transform - why imaginary part represents the phase shift
The amplitude is $|\hat{f}(\omega)|$ and the phase is in $\arg \hat{f}(\omega)$, where $\hat{f}(\omega) = |\hat{f}(\omega)| e^{i \arg \hat{f}(\omega)}$. The main thing you need to know is that a shift $t \mapsto t-a$ in the time domain is the same as a multiplication by $e^{-i a \omega}$ in the frequency domain. What you can do to associate a time localization with portions of the spectrum is to inverse Fourier transform $\hat{g}(\omega)= \hat{f}(\omega) \phi(\omega)$, where $\phi$ rules out every frequency except those in some interval $[a,b]$, to obtain $g(t)$, and look at $$\frac{1}{\|g\|^2} \int_{-\infty}^\infty t |g(t)|^2dt = \frac{\int_a^b i\,\hat{g}'(\omega)\overline{\hat{g}(\omega)}d\omega}{\int_a^b |\hat{g}(\omega)|^2d\omega}$$ which indicates at which time $g(t)$ has the most energy.
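A quick numerical illustration of the shift rule, using the discrete analogue (a circular shift by $a$ samples multiplies the DFT by $e^{-2\pi i k a/N}$); this sketch is mine, not part of the original answer:

```python
import numpy as np

N, a = 256, 10
t = np.arange(N)
x = np.exp(-0.5 * ((t - 64) / 8.0) ** 2)  # a smooth pulse
xs = np.roll(x, a)                         # circular shift t -> t - a

X, Xs = np.fft.fft(x), np.fft.fft(xs)
k = np.arange(N)
expected = X * np.exp(-2j * np.pi * k * a / N)  # linear phase factor

assert np.allclose(Xs, expected)           # only the phase changed ...
assert np.allclose(np.abs(Xs), np.abs(X))  # ... the amplitude is untouched
```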
A Cauchy with $\varepsilon$- type inequality for $C^1$ functions
Suppose first that there is some $t_0$ with $u(t_0) = 0$. Then by the fundamental theorem of calculus, we have $$u(t)^2 = 2 \int_{t_0}^t u'(s) u(s)\,ds \le 2 ||u'||_{\infty} ||u||_1.$$ Thus $$|u(t)| \le \sqrt{(2 \epsilon ||u'||_\infty)(\frac{1}{\epsilon} ||u||_1)}$$ So taking the supremum over $t$ and using the AM-GM inequality we get $$||u||_\infty \le \epsilon ||u'||_\infty + \frac{1}{2\epsilon} ||u||_1.$$ Now suppose there is no $t_0$ with $u(t_0) = 0$. By the intermediate value theorem either $u>0$ everywhere or $u<0$ everywhere. By replacing $u$ by $-u$ if necessary we assume $u > 0$ everywhere. Let $t_1$ be the point where $u$ attains its minimum. Set $\tilde{u}(t) = u(t) - u(t_1)$. Note that $\tilde{u}' = u'$ and $||\tilde{u}||_1 = ||u||_1 - u(t_1) \le ||u||_1$. Applying the previous case to $\tilde{u}$ we have $$||u||_\infty = ||\tilde{u}||_\infty + u(t_1) \le \epsilon ||u'||_\infty + \frac{1}{2\epsilon} ||u||_1 + u(t_1).$$ Finally, by integrating the inequality $u(t_1) \le u(t) = |u(t)|$, we have $u(t_1) \le ||u||_1$. So putting this together gives $$||u||_\infty \le \epsilon ||u'||_\infty + \left(1 + \frac{1}{2\epsilon}\right) ||u||_1.$$
Explicit examples of functions with flow?
I thought the wikipedia article is pretty straightforward. Define a function $\phi: \mathbb{R}^2 \to \mathbb{R}: (x,t) \mapsto \phi(x,t)$. Now, you want the second parameter $t$ to be interpreted as the number of times you have applied the function to $x$, in a way. To formalize this, you introduce the following rule on $\phi$: $$\phi(\phi(x,t),s)=\phi(x,t+s)$$ and this for all $x,t$ and $s$. In particular, you see that: $$\phi(\phi(x,t),t)=\phi(x,2t)$$ and more generally, if we define $\phi_t:\mathbb{R}\to\mathbb{R}:x\mapsto\phi(x,t)$, $$\phi_t^n(x)=\phi(x,nt)=\phi_{nt}(x)$$ which is exactly the behaviour you would like to have. Determining a flow $\phi(x,t)$ starting from the condition that $\phi(x,1)=f(x)$ is not an easy task, however, and cannot be done for an arbitrary function, I think. I remember another post related to the question. I think it was a question about the Vieta product.
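Two explicit examples (standard ones, not from the original answer) may help: $$f(x) = x+1:\quad \phi(x,t) = x+t, \qquad\qquad f(x) = 2x:\quad \phi(x,t) = 2^t x .$$ In both cases $\phi(\phi(x,t),s)=\phi(x,t+s)$ and $\phi(x,1)=f(x)$, so for instance $\phi_{1/2}$ is a "half iterate" of $f$; for $f(x)=2x$ it is $x\mapsto\sqrt2\,x$.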
Lie theory for physicists
I've been in your same situation, and I think this might be the book you're looking for: Physics from Symmetry, by Jakob Schwichtenberg. It explains the fundamental concepts of Lie/representation theory carefully, in a quite intuitive manner, motivated via applications in Physics. There is a chapter dedicated exclusively to Quantum Mechanics, whose fundamental principles are derived using mathematical tools alone, but you'll also find discussions of QFT, Electromagnetism and Classical Mechanics. For a more mathematically rigorous approach, I'd recommend Naive Lie Theory by John Stillwell, also an excellent read, which successfully conveys complex ideas in a simple fashion. You'll especially enjoy the historical notes at the end of the book. Finally, I've read some good reviews of Group Theory and Physics by Shlomo Sternberg, which appears to be a main reference work.
How do you evaluate $\int_0^\pi \sin(m\theta) \frac{\partial^n }{\partial \theta^n} (\cos^2\theta -1)^n \, d\theta$ with $m>n$ and $m+n$ an odd integer
Integrating by parts we get $$ A=\int_0^\pi \sin(m\theta) \frac{d^n }{d \theta^n} [(\cos^{2}\theta -1)^n] d\theta\\ =\sin(m\theta) \frac{d^{n-1} }{d \theta^{n-1}} [(\cos^{2}\theta -1)^n] {\Large{|}}_{0}^{\pi}-m\int_0^\pi \cos(m\theta) \frac{d^{n-1} }{d \theta^{n-1}} [(\cos^{2}\theta -1)^n] d\theta. $$ The boundary terms $$ \frac{d^{n-k} }{d \theta^{n-k}} [(\cos^{2}\theta -1)^n] {\Large{|}}_{0}^{\pi}=0,\qquad 1\le k\le n, $$ vanish since the derivatives $$ \frac{d^{n-k} }{d \theta^{n-k}} [(\cos^{2}\theta -1)^n] ,\qquad 1\le k\le n, $$ are each divisible by $1-\cos^{2}\theta=\sin^2\theta$, which vanishes at $0$ and $\pi$. Iterating the integration by parts $n$ times (the sign $(-1)^n$ from the $n$ integrations by parts cancels against $(\cos^2\theta-1)^n=(-1)^n\sin^{2n}\theta$), we finally get $$ A=\int_0^\pi \sin^{2n}\theta\frac{d^n }{d \theta^n}\sin(m\theta) d\theta. $$ Now, depending on whether $n$ and $m$ are odd or even, you can calculate this by applying the binomial theorem to obtain the integral as a finite sum.
Why can't there be a third minimal normal subgroup?
There is a result that the centralizer of the image of the left regular permutation representation of a group $G$ is equal to the image of the right regular representation, and the claim follows from this: there are only two regular representations, the left and the right one. So, if there are two minimal normal subgroups, then they must be the images of the left and right regular representations of some group, which must be a direct product of copies of a nonabelian simple group. The smallest such example has degree 60 with $N \cong K \cong A_5$.
Recursive definition of recursively defined operations
As Stephen points out, these operations are given by the three-argument Ackermann function, plugging in 0 (for addition), 1 (for multiplication), etc. to the third argument. Another notation used for the same thing is Knuth's up-arrow notation: it starts with $a \mathbin{\uparrow} b$ to denote $a^b$ and continues by denoting the next functions in the sequence with multiple arrows: $a \mathbin{\uparrow\uparrow} b$, $a \mathbin{\uparrow\uparrow\uparrow} b$, etc.
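A minimal recursive sketch of this hierarchy in Python (the function name `up` is mine; `n = 1` is exponentiation, `n = 2` tetration, and so on):

```python
def up(a, b, n):
    """Compute a ↑^n b in Knuth's up-arrow notation (n >= 1)."""
    if n == 1:
        return a ** b          # one arrow is exponentiation
    if b == 0:
        return 1               # standard base case: a ↑^n 0 = 1 for n >= 2
    return up(a, up(a, b - 1, n), n - 1)

assert up(2, 3, 1) == 8                 # 2^3
assert up(2, 3, 2) == 16                # 2↑↑3 = 2^(2^2)
assert up(3, 3, 2) == 7625597484987     # 3↑↑3 = 3^27
```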