first homology group of a disk with $n$ holes
It is true: the fundamental group of this space is the free group on $n$ generators, since the space deformation retracts onto a bouquet of $n$ circles; its first homology, which is the abelianization of its fundamental group, is therefore $\mathbb{Z}^n$.
Question in the proof of Theorem 7.4 from The Arithmetic of Elliptic Curves
My professor told me this is how it is probably being extended. Fact: Let $K$ be a field of characteristic different from 2. Then there is a 1-1 correspondence between quadratic forms and symmetric bilinear forms on a finite dimensional vector space over $K$, given by $Q_B(x)=B(x,x)$ and $B_Q(x,y)=\frac12(Q(x+y)-Q(x)-Q(y))$. Moreover, this correspondence preserves the property of being positive definite. Now $\deg:M\rightarrow \mathbb{Z}$ is a quadratic form (from Corollary III.6.3 of the book). Using this, we can define a bilinear form $B=2B_{\deg}:M\times M \rightarrow \mathbb{Z}$ (multiplying by 2 so that it lands in $\mathbb{Z}$; no 1-1 correspondence is being used here, we are just defining a bilinear form by the above formula). Choosing a basis $\{e_i\}$ for $M$, we note that $B$ can be written as a matrix. Choosing $\{e_i\otimes 1\}$ as a basis for the vector space $V=M\otimes \mathbb{R}$, define a bilinear form on $V$ by the matrix $A=B/2$ (we can divide by 2 in $\mathbb{R}$). Then, by the correspondence above, we get a quadratic form $Q_A$ on $V$ which restricts to the original quadratic form $\deg$ on $M\subseteq V$. Moreover, $Q_A$ is continuous, as it is given by $Q_A(x)=x^t(B/2)x$ for a column vector $x\in (V,\{e_i\})\simeq (\mathbb{R}^n,\{e_i\})$ and is thus a polynomial in the coordinates.
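A toy illustration of the construction (my own example, with $Q(x_1,x_2)=x_1^2+x_1x_2+x_2^2$ standing in for $\deg$ on a rank-$2$ module $M$): here $B_Q(e_1,e_1)=Q(e_1)=1$ and $B_Q(e_1,e_2)=\frac12\big(Q(e_1+e_2)-Q(e_1)-Q(e_2)\big)=\frac12$, so $$B=2B_Q=\begin{pmatrix}2&1\\1&2\end{pmatrix},\qquad Q_A(x)=x^t\tfrac{B}{2}\,x=x_1^2+x_1x_2+x_2^2,$$ i.e. $B$ has integer entries, and the quadratic form obtained from $A=B/2$ on $V=M\otimes\mathbb{R}$ restricts to $Q$ on $M$ and is visibly a polynomial in the coordinates, hence continuous.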
How to define a profinite morphism
A profinite morphism presumably means something like a projective limit of finite morphisms. Since finite morphisms are affine, we can think what this would mean in terms of maps of affine schemes. An inductive limit of finite maps $R \to S_i$ is the same as a map $R \to S$ where $S$ is integral over $R$, so this leads me to guess that a profinite morphism is a morphism $X \to Y$ which is affine, and so that on affine opens Spec $R$ in $Y$, the preimage in $X$ is of the form Spec $S$ with $S$ integral over $R$. This is compatible with exercise 1 (b), and exercise 5, on the assignment that you link to. Note, though, that as others have already pointed out, assuming that my guess is correct, this terminology is not standard, or at least not common. More typically, such morphisms are called integral. (This is the terminology used in Ravi Vakil's notes, and presumably also in the stacks project, and I also presume it is the terminology used in EGA.)
induced maximal ideals in ring of functions
Not only one, but sometimes many. For each $x\in X$ and each maximal ideal $M$, you can consider the subset of functions $f$ such that $f(x)\in M$. If you check, you'll find this is a maximal ideal of the ring of functions. It is not always the case that these are all the maximal ideals, though.
Books, sites, guides about mental arithmetic by hand and tricks?
One classic text is The Trachtenberg Speed System, which actually has an amazing history: Trachtenberg created the tools for the book (speed mathematics, including multiplication) while in a Nazi concentration camp, without the use of paper or a writing utensil. The Wikipedia page gives a pretty good overview of the book, and there is a bit about Jakow Trachtenberg on Wikipedia as well. p.s. Don't let the \$91 price on Amazon.com worry you; you can find many used copies for less than \$10.
What is the derivative of $f: \mathbb C \to \mathbb R$ where $f(z)=z\bar z$?
Note that this function is not holomorphic, i.e. the limit $$ \lim_{h \to 0} \frac{f(z + h) - f(z)}{h} $$ does not exist. You may consider the function $\mathbb{R}^2 \to \mathbb{R}$ given by $(x, y) \mapsto x^2 + y^2$. This function is differentiable and its Jacobian is $\begin{pmatrix} 2x & 2y\end{pmatrix}$.
Finding parameter for quadratic equation
By the factorisation $$(x_1^4-x_2^4)=(x_1+x_2)(x_1-x_2)(x_1^2+x_2^2)$$ we get $$(x_1^4-x_2^4)=(3a)\left[\sqrt{(x_1+x_2)^2-4x_1x_2}\right](9a^2-2a^2)$$ $$\implies (x_1^4-x_2^4)=(3a)(a\sqrt{5})(7a^2)=21\sqrt{5}\,a^4$$ Can you continue?
Definite integral convergence answer check
Here is what I get. Using the Taylor series approximations for the various functions involved, we have: $\left ( \frac{(1-\cos x)^{a}}{\ln (1-x)} \right )\left ( \frac{(1-x)^{3}}{\ln (1-x)} \right )\approx \left ( \frac{x^{2a}}{2^{a}(-x-(\frac{x^{2}}{2}))} \right )\left ( \frac{(1-x)^{3}}{(-x-(\frac{x^{2}}{2}))} \right )\approx \left ( \frac{-x^{2a}}{2^{a}} \right )\left ( \frac{1-3x}{-x} \right )=(1-3x)\frac{x^{2a-1}}{2^{a}}\approx \frac{x^{2a-1}}{2^{a}}$ The integral is then approximately $\frac{1}{2^{a}}\lim _{m\rightarrow 0^{+}}\int_{m} ^{1/2}x^{2a-1}dx=\lim _{m\rightarrow 0^{+}}\frac{1}{2a\,2^{a}}\left ( (1/2)^{2a}-m^{2a} \right )$, so the integral converges when $a>0$.
Finding $\int^{\infty}_{0}\frac{\ln^2(x)}{(1-x^2)^2} dx$
You're definitely on the right track with that substitution of $x=\frac1t$. Basically we have: $$I=\int^{\infty}_{0}\frac{\ln^2(x)}{(1-x^2)^2}dx=\int_0^\infty \frac{x^2\ln^2 x}{(1-x^2)^2}dx$$ Now what if we add them up? $$2I=\int_0^\infty \ln^2 x \frac{1+x^2}{(1-x^2)^2}dx$$ If you don't know how to deal easily with the integral $$\int \frac{1+x^2}{(1-x^2)^2}dx=\frac{x}{1-x^2}+C$$ I recommend taking a look here. Anyway we have, integrating by parts: $$2I= \underbrace{\frac{x}{1-x^2}\ln^2x \bigg|_0^\infty}_{=0} +2\underbrace{\int_0^\infty \frac{\ln x}{x^2-1}dx}_{\large =\frac{\pi^2}{4}}$$ $$\Rightarrow 2I= 2\cdot \frac{\pi^2}{4} \Rightarrow I=\frac{\pi^2}{4}$$ For the last integral see here for example.
Confusion in mapping from s to F(s) plane
Here is a very informal answer. The question is (I presume) about the relationship between poles, zeroes, angles and encirclements. The quantity of interest is ${f' \over f}$. Informally, we have $\log f(s) = \log |f(s)| + i \operatorname{arg} f(s)$, so $\operatorname{im} \log f(s) =\operatorname{arg} f(s)$, and differentiating the left hand side gives $\operatorname{im} {f'(s) \over f(s)}$. So, from an intuition perspective, we expect that integrating ${f'(s) \over f(s)}$ over $\gamma$ and taking the imaginary part will give the change in $\operatorname{arg} f(s)$ as $s$ traverses $\gamma$. So, now suppose $f$ has the form $f(s) = (s-a)^p g(s)$ where $g$ is analytic at $a$ and $p$ is a (possibly negative) integer; then ${f'(s) \over f(s)} = {p \over s-a} + {g'(s) \over g(s)}$. Then we see (for sufficiently 'small' $\gamma$) that ${1 \over 2 \pi i} \int_\gamma {f' \over f} dz = p + {1 \over 2 \pi i} \int_\gamma {g' \over g} dz$, so the $(s-a)^p$ term contributes $p$ encirclements (if $p<0$, the count is in the opposite direction). The relevant result is the argument principle ${1 \over 2 \pi i} \int_\gamma {f' \over f} dz = \sum_i \eta(\gamma, z_i) - \sum_j \eta(\gamma, p_j)$.
How do you go about simplifying this such that you can use the geometric series equation?
Hint: $$S=\sum_{c=0}^\infty ca^c=\sum_{c=1}^\infty ca^c=\sum_{c=0}^\infty(c+1)a^{c+1}=aS+a\sum_{c=0}^\infty a^c.$$
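For completeness, solving the hint's relation for $S$ (assuming $|a|<1$, so that $\sum_{c\ge0}a^c=\frac{1}{1-a}$ converges): $$S=aS+\frac{a}{1-a}\ \Longrightarrow\ S(1-a)=\frac{a}{1-a}\ \Longrightarrow\ S=\frac{a}{(1-a)^2}.$$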
Condition implying that $E(X|\mathcal{C}_1)=E(X|\mathcal{C}_2)$ when $\mathcal{C}_1\subseteq\mathcal{C}_2$
What is $E[X|\mathcal C_1]$? It is any random variable $Z$ that satisfies: 1. $Z$ is $\mathcal C_1$-measurable; 2. $$\int_{C_1} E[X|\mathcal C_1] \,d\mathbb P = \int_{C_1} Z \,d\mathbb P \qquad \forall C_1 \in \mathcal C_1.$$ Is $E[X|\mathcal C_2]$ one of those $Z$'s? 1. $E[X|\mathcal C_2]$ is $\mathcal C_1$-measurable by assumption. $(*)$ 2. With $Z=E[X|\mathcal C_2]$: $$LHS = \int_{C_1} E[X|\mathcal C_1] d\mathbb P$$ $$ = \int_{\Omega} 1_{C_1} E[X|\mathcal C_1] d\mathbb P$$ $$ = \int_{\Omega} E[X1_{C_1}|\mathcal C_1] d\mathbb P$$ $$ = E[E[X1_{C_1}|\mathcal C_1]]$$ $$ = E[X1_{C_1}]$$ $$RHS = \int_{C_1} Z d\mathbb P$$ $$ = \int_{C_1} E[X|\mathcal C_2] d\mathbb P$$ $$ = \int_{\Omega} 1_{C_1} E[X|\mathcal C_2] d\mathbb P$$ $$ = E[1_{C_1}E[X|\mathcal C_2]]$$ $$ = E[E[X1_{C_1}|\mathcal C_2]]$$ $$ = E[X1_{C_1}]$$ Note that #2 does not make use of #1, unlike the shortcut below. This is the formal way to go about this; I believe there is also an intuitive way of thinking about it. And of course there is the shortcut $$E[X|\mathcal C_1] = E[E[X|\mathcal C_2]|\mathcal C_1].$$ Since $E[X|\mathcal C_2]$ is given to be $\mathcal C_1$-measurable $(*)$, we have $$RHS = E[E[X|\mathcal C_2]|\mathcal C_1] \stackrel{(*)}{=} E[X|\mathcal C_2]\, E[1|\mathcal C_1] = E[X|\mathcal C_2] (1).$$ $(*)$ This is critical: there is no reason for $E[X|\mathcal C_2]$ to be $\mathcal C_1$-measurable otherwise, because $\mathcal C_2$ is the bigger $\sigma$-field.
How do I find the equation of the sphere given a circle that passes through the sphere and a plane that is tangent to the sphere?
Assume the center of the sphere is $(0,0,t)$, then the equation of the sphere is $$ x^2+y^2+(z-t)^2=t^2+11 $$ Using the formula for the distance from a point to a plane, we have $$ d= \frac{|1\cdot 0+1\cdot 0+1\cdot t-5|}{\sqrt{1^2+1^2+1^2}} = \dfrac{|t-5|}{\sqrt{3}} $$ Since it is tangent to the plane, the following relation holds: $$ d = r = \sqrt{t^2+11} $$ Simplify, and we get $$ \left( \frac{t-5}{\sqrt{3}} \right) ^2=t^2+11\\ \dfrac{t^2-10t+25}{3}=t^2+11\\ t^2-10t+25=3t^2+33 \\ 2t^2+8=-10t \\ t^2+5t+4=0 $$ Thus $$ t=-1 \text{ or } t = -4 $$ We are done.
Gradient of a vector field
The gradient of a vector field in ordinary $\mathbb{R}^3$ is a tensor field. You will need the tensor product $\otimes$ to represent it: $$ \nabla V = \left( \hat{e}_i \frac{\partial}{\partial x_i} \right ) (V_m \hat{e}_m) \\ = \frac{\partial V_m}{\partial x_i} \hat{e}_i \otimes \hat{e}_m $$
Extracting a ball from an urn, introducing it into the second. Expected value=?
You have a $\frac{6}{13}$ chance of drawing a white ball from the first urn which would give the second urn $11$ white and $5$ black balls. You have a $\frac{7}{13}$ chance of drawing a black ball from the first urn which would give the second urn $10$ white and $6$ black balls. $$ \overbrace{\frac{6}{13}\underbrace{\frac{11}{16}5}_{\substack{\text{expected}\\\text{number}\\\text{of white}\\\text{drawn}}}}^{\text{white drawn from first urn}}+\overbrace{\frac{7}{13}\underbrace{\frac{10}{16}5}_{\substack{\text{expected}\\\text{number}\\\text{of white}\\\text{drawn}}}}^{\text{black drawn from first urn}} $$
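For the record (my arithmetic), this evaluates to $$\frac{6}{13}\cdot\frac{11}{16}\cdot 5+\frac{7}{13}\cdot\frac{10}{16}\cdot 5=\frac{5(66+70)}{208}=\frac{85}{26}\approx 3.27.$$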
primes of type norm($a+b\sqrt{-c}$) = primes of type $1 \bmod c\,2^{n-1}$?
It's not always $1$ mod something; by some miracle it is completely described by a congruence. The key to this question is quadratic reciprocity. I recommend reading either Conway's explanation of Zolotarev's proof or Eisenstein's lattice point proof to start with. $-1$ is a square mod $p$ when $p$ is equal to $1$ mod $4$, by quadratic reciprocity. See Fermat's Christmas theorem and its many proofs, but in particular the proof using geometry of numbers is relevant. When $-1$ is a square mod $p$, $p$ can be "split" into the product of two conjugate Gaussian integers: i.e. it can be written as the norm of a Gaussian integer. This same idea applies for all quadratic integers (there are some tricky parts you need to deal with). You can learn more about quadratic integers on KConrad's page http://www.math.uconn.edu/~kconrad/blurbs/
If an exradius of a triangle is the sum of the two other exradii and the inradius, then the triangle is ...
In right $\triangle ABC$, with $\angle C=90^\circ$, and standard definitions for $a,b,c,s,r_a,r_b,r_c$, we have \begin{align*} r_a&=s-b\\ r_b&=s-a\\ r_c&=s. \end{align*} These follow from the fact that the distance from a vertex to the point of tangency between a side and the opposite excircle is always $s$. Also, we have $$r=\frac{a+b-c}{2}=s-c.$$ The proof is a fun little exercise involving Equal Tangents. Then, $$r_c=s=3s-2s=(s-b)+(s-a)+(s-c)=r_a+r_b+r,$$ so the answer is $\boxed{\mathbf{(4)}\text{ right-angled}}$.
Show that $f_n(x)=\sqrt n\, x^n$ diverges in $\big( {\cal C}[0,1], d_2\big)$
The point-wise limit is $0$ for $x <1$. If the sequence converges in $d_2$, the limit has to be $0$. But $\int|f_n(x)|^{2} dx =\frac n {2n+1} \to \frac 1 2$. Hence the sequence does not converge. $e^{t} \geq t$ for all $t \geq 0$. Put $t=n\ln (\frac 1 x)$ to get $\frac 1 {x^{n}} \geq n\ln (\frac 1 x)$, or $0 \leq \sqrt n x^{n} \leq \frac 1{ \sqrt n \ln (\frac 1 x)}$. Hence the point-wise limit is $0$. Alternative proof of almost everywhere convergence: If $y >1$ then $y^{n} =(1+(y-1))^{n} >n(y-1)$ by the binomial expansion of $(1+(y-1))^{n}$. Hence $\frac {\sqrt n} {y^{n}} <\frac {\sqrt n} {n(y-1)} \to 0$. Put $y=\frac 1 x$ to see that $f_n(x) \to 0$ for all $x \in [0,1)$.
Showing that if $f=g$ a.e. on a general measurable set (for $f$, $g$ continuous), it is not necessarily the case that $f=g$.
I am not normally a cat person but this one and I are old friends now. So I think I should jump into the litter box and complete the task we have been set. What is the definitive answer to this problem that was apparently given to all the kittys in a graduate class somewhere? Here is a way to formulate and answer the problem. Definition. A set $E$ of real numbers is said to be a purrfect set if whenever $f$, $g:E\to R$ are continuous functions and $N\subset E$ is a set of Lebesgue measure zero for which $f(x)=g(x)$ for all $x\in E\setminus N$ it follows that $f(x)=g(x)$ for all $x\in E$. Don't confuse purrfect with perfect. $[0,1]$ is a purrfect set [proved by J. Cat]. $[0,1] \cup \{2\} $ is not a purrfect set [checked by J. Cat]. Every open set is a purrfect set [method of J. Cat works]. Every set open in the density topology is a purrfect set. Naturally the problem is not complete until we characterize purrfect sets. The following does the job giving a paw-sitive solution to the originally posed problem. Theorem. A necessary and sufficient condition for a set of real numbers $E$ to be a purrfect set is that for each $x\in E$ and for each $\epsilon>0$ $$m(E\cap(x-\epsilon,x+\epsilon))>0$$ where $m$ is Lebesgue outer measure. We can leave this as an exercise for all you Cool Cats since the methods should be clear. Note that purrfect sets are kind of thick and furry at each point. In case you think that purrfect sets might be of little interest and that the instructor who set the problem should be charged with animal cruelty let me leave you with another problem. Problem. Suppose that $F:R\to R$ is an everywhere differentiable function. Show that the set $$\{x: a<F'(x)<b\}$$ is a purrfect set for any $a$ and $b$. [My apologies: someone seems to have run my posting through the web site Kittify hence all the cat puns. Too late to edit them out.]
Does the following limit exist and if it exists, what is the answer?
Verify the inequality $\frac{|x|y^2}{x^2+y^2} \le |x|$
Finding triangular matrix
The vector $(1,2,2)$ is an eigenvector of $M$ corresponding to the eigenvalue $3$ and the vector $(0,-1,1)$ is an eigenvector of $M$ corresponding to the eigenvalue $2$. Now, consider the equation$$M.(x,y,z)=2(x,y,z)+(0,-1,1).$$One solution of this system is $(1,3,0)$. So, if $v_1=(1,2,2)$, $v_2=(0,-1,1)$, and $v_3=(1,3,0)$, you have $M.v_1=3v_1$, $M.v_2=2v_2$, and $M.v_3=2v_3+v_2$. So, take$$P=\begin{bmatrix}1&0&1\\2&-1&3\\2&1&0\end{bmatrix}$$(the columns of $P$ are the vectors $v_1$, $v_2$, and $v_3$). Then$$P^{-1}MP=\begin{bmatrix}3&0&0\\0&2&1\\0&0&2\end{bmatrix}.$$
The Rotate and Shift operations in a Finite Field
There is no expression in what you call $GF_2$ per se for shifting a sequence of bits (elements of $GF_2$) cyclically (or rotating it) for the simple reason that the sequence is not an element of $GF_2$; the individual components (bits comprising the sequence) are in $GF_2$, but not the sequence itself. Now, if you interpret the $n$-bit sequence $(c_{n-1},c_{n-2}, \ldots, c_1, c_0)$ as the integer $C = \sum_{i=0}^{n-1} c_i 2^i$, then it is true that a rotation by $k$ places, $0 \leq k < n$, that is, $$(c_{n-1},c_{n-2}, \ldots, c_1, c_0) \longrightarrow (c_{n-k-1}, c_{n-k-2}, \ldots, c_0,c_{n-1}, c_{n-2}, \ldots, c_{n-k})$$ gives the sequence corresponding to the integer $D^{(k)}$ where $$D^{(k)} = \begin{cases}2^kC \bmod (2^n-1), & C \neq 2^n-1,\\ 2^n-1, & C = 2^n-1,\end{cases}$$ but this is not an operation in the finite field $GF_2$. Note that the special calculation for $(1,1,\ldots 1) \leftrightarrow C=2^n-1$ is needed to avoid getting a $0$ result when the mod $2^n-1$ operation is carried out on $2^k(2^n-1)$. Try it, you will like it. Your $ROT[(0,1,1,1),2]=(1,1,0,1)$ corresponds to $7$ being transformed to $2^2\times 7 = 28 \equiv 13 \bmod 15.$ Alternatively, if you interpret the sequence as the coefficients of a polynomial $C(x) = \sum_{i=0}^{n-1}c_i x^i$, then rotating by $k$ bits results in the bit sequence corresponding to the polynomial $$D^{(k)}(x) = x^k C(x) \bmod (x^n-1)$$ where both $C(x)$ and $D^{(k)}(x)$ belong to a mathematical structure denoted by $\mathbb F_2[x]$ and called the ring of polynomials in $x$ with coefficients in $\mathbb F_2 = GF_2$. Once again, these polynomials are not "in" $GF_2$.
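A small Python sanity check of the integer description above (the helper names `rot_mod` and `rot_bits` are mine):

```python
def rot_mod(C, k, n):
    """Rotate the n-bit integer C by k places via the identity D = 2^k * C mod (2^n - 1)."""
    M = (1 << n) - 1                  # 2^n - 1
    if C == M:                        # the all-ones word: 2^k * M mod M would give 0
        return M
    return (C << k) % M

def rot_bits(C, k, n):
    """The same rotation done directly on the bit pattern, for comparison."""
    mask = (1 << n) - 1
    return ((C << k) | (C >> (n - k))) & mask

# The example from the answer: ROT[(0,1,1,1), 2] = (1,1,0,1), i.e. 7 -> 13 with n = 4.
assert rot_mod(7, 2, 4) == rot_bits(7, 2, 4) == 13
assert all(rot_mod(C, k, 4) == rot_bits(C, k, 4) for C in range(16) for k in range(1, 4))
```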
Geometric representation of $\mathbb{Z}_n^\times$.
The multiplicative group of integers modulo $n$ can be visualized by constructing its cycle graph, see here.
Consecutive Vertices of a Quadrilateral
If $\frac{AD}{AB}>\frac{1}{2}$ we don't get a quadrilateral because $GF$ intersects $DE$; if $\frac{AD}{AB}=\frac{1}{2}$ then $D\equiv G$ and we don't get a quadrilateral either. If $\frac{AD}{AB}<\frac{1}{2}$ then it's valid. Now, we see that A) and E) are not relevant.
What is the reason a direct product of injective modules is injective, while a direct sum is not necessarily?
I think the question is more subtle than first appears. Although I cannot give you a complete characterisation of when a product of projectives is projective, I can answer the question for a fairly interesting class of rings: It obviously works for some rings, like fields. More generally: One sufficient condition for a product of projectives to be projective is that $R$ is quasi-Frobenius. That is, $R$ is left and right noetherian and injective as a left $R$-module and as a right $R$-module. Semisimple artinian rings and $R[G]$ where $R$ is quasi-Frobenius (e.g. a field) and $G$ is a finite group are examples, which shows that there are interesting cases beyond fields. The important property of these rings is that an $R$-module over a quasi-Frobenius ring $R$ is projective iff it is injective. Hence a direct product of projectives is a direct product of injectives, and hence injective, hence projective. Of course, there are still other questions to ask: if every product of projectives is projective, is the underlying ring quasi-Frobenius (I'd guess not), and given a specific ring, what conditions on the factors will ensure projectivity? P.S. Duality in formulating conditions works well for definitions in the category of modules but not with respect to the original ring, because the opposite category of the category $R$-Mod is not $S$-Mod for any $S$, and certainly not any $S$ "dually related" to $R$, unless of course $R = 0$.
Unclear theorem about vector spaces
What this theorem is saying, essentially, is that, given one basis of a finite-dimensional space, you can swap out one of the basis vectors for another, provided the new vector is a linear combination of the old set: the new set will still be a basis. You'll notice this is a Lemma, not a Theorem: I expect the authors are moving towards some theorems involving change-of-basis, which is quite important in linear algebra. This particular Lemma is not that important by itself except as a stepping-stone to change-of-basis. Remember what a basis is: a set of linearly independent vectors such that any arbitrary vector is a linear combination of vectors in the set.
Doubt in application of Weierstrass Theorem - Showing that $D$ is compact and $f$ is continuous
On D being closed: Two methods: $(\;(x_n,y_n)\;)_n$ converges to $(x,y) \implies (\;(x-x_n)^2+(y-y_n)^2\;)_n$ converges to $0\implies (\lim_n x_n=x$ and $\lim_ny_n=y)\implies (\lim_n x_n^2=x^2$ and $\lim_ny_n^2=y^2)$. If $(x_n,y_n)\in D$ and $(\;(x_n,y_n)\;)_n$ converges to $(x,y)$ then $$|x^2+y^2-1|=|(x_n^2+y_n^2-1)+(x^2-x_n^2)+(y^2-y_n^2) |=$$ $$=|(x^2-x_n^2)+(y^2-y_n^2)|\leq |x^2-x_n^2|+|y^2-y_n^2|$$ which is $<r$ for any $r>0$ if $n$ is big enough. So $|x^2+y^2-1|$ is less than any $r>0$, so $x^2+y^2-1=0$. So $(x,y)\in D.$ Another method: Show that the complement of $D$ is open. With $\|(x,y)\|=\sqrt {x^2+y^2}$ we have the Triangle Inequality: For $u,v\in \mathbb R^2$ we have $\|u+v\|\leq \|u\|+\|v\|.$ If $u\in \mathbb R^2$ with $\|u\|=1+s\ne 1$ then the open ball $B(u,|s|/2)$ is disjoint from $D$ because (i). If $s<0$ then $$v\in B(u,|s|/2)\implies \|v\|=\|(v-u)+u\|\leq \|v-u\|+\|u\|<$$ $$<|s|/2+(1+s)=|s|/2+(1-|s|)=1-|s|/2< 1.$$ (ii). If $s>0$ then $$v\in B(u,|s|/2)\implies \|v\|=\|u-(u-v)\|\geq \|u\|-\|u-v\|>$$ $$>(1+s)-|s|/2=(1+|s|)-|s|/2=1+|s|/2>1.$$
Given subspace $S\subset\mathbf{R}^n$, construct the matrix $A$ such that $S=\{x\mid Ax=0\}$
Let $(e_1,\ldots,e_k)$ be a basis of $S$. Complete it, getting a basis $b=(e_1,\ldots,e_n)$ of $\mathbb{R}^n$. Consider the linear map $f$ from $\mathbb{R}^n$ into itself such that $f(e_i)=0$ if $i\leqslant k$ and that $f(e_i)=e_i$ otherwise. Then the matrix of $f$ with respect to the standard basis of $\mathbb{R}^n$ is a solution of your problem.
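If you want something computational, here is a NumPy sketch that produces such an $A$ by taking an orthonormal basis of the orthogonal complement of $S$ — a different (but equivalent) construction from the basis-completion above; the function name is mine:

```python
import numpy as np

def kernel_matrix(basis_rows):
    """Return A with S = {x : Ax = 0}, where S is spanned by the (independent) rows given."""
    B = np.atleast_2d(np.asarray(basis_rows, dtype=float))
    _, _, Vt = np.linalg.svd(B)
    k = B.shape[0]                 # dim S
    return Vt[k:]                  # rows form an orthonormal basis of the orthogonal complement

S_basis = [[1, 0, 1], [0, 1, 1]]   # a 2-dimensional subspace of R^3
A = kernel_matrix(S_basis)
assert np.allclose(A @ np.array(S_basis, dtype=float).T, 0)   # A kills every vector of S
```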
sum of perpendiculars of a regular 24 sided shape inscribed in a circle
Well, Blue provided the answer with the series, which written out in 'longhand' and letting $r=1$ is: $$\sin \frac{0\pi}{12} = 0$$ $$\sin \frac{1\pi}{12} = \frac{\sqrt{6}-\sqrt{2}}{4}$$ $$\sin \frac{2\pi}{12} = \frac{1}{2}$$ $$\sin \frac{3\pi}{12} = \frac{\sqrt{2}}{2}$$ $$\sin \frac{4\pi}{12} = \frac{\sqrt{3}}{2}$$ $$\sin \frac{5\pi}{12} = \frac{\sqrt{6}+\sqrt{2}}{4}$$ $$\sin \frac{6\pi}{12} = 1$$ $$\sin \frac{7\pi}{12} = \frac{\sqrt{6}+\sqrt{2}}{4}$$ $$\sin \frac{8\pi}{12} = \frac{\sqrt{3}}{2}$$ $$\sin \frac{9\pi}{12} = \frac{\sqrt{2}}{2}$$ $$\sin \frac{10\pi}{12} = \frac{1}{2}$$ $$\sin \frac{11\pi}{12} = \frac{\sqrt{6}-\sqrt{2}}{4}$$ $$\sin \frac{12\pi}{12} = 0$$ Which sums to $2 + \sqrt{2} + \sqrt{3} +\sqrt{6}$ and for any value of $r$ is $r ( 2 + \sqrt{2} + \sqrt{3} +\sqrt{6} )$
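A quick numerical check of the sum (plain Python, $r=1$):

```python
import math

total = sum(math.sin(j * math.pi / 12) for j in range(13))        # j = 0, 1, ..., 12
closed_form = 2 + math.sqrt(2) + math.sqrt(3) + math.sqrt(6)
assert abs(total - closed_form) < 1e-12                           # both are about 7.595754
```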
Give a grammar for the complement of a non-CFL language
This is a somewhat more complicated version of a classic problem: show that the complement of this language is context-free: $$L = \{ ww':w,w'\in \Sigma^*, w = w'\}$$ The key part of the answer is to divide the string $ww'$ in a different way: $$ww' = xw_ix' yw'_iy' \text{ where }|x|=|x'|\text{ and }|y|=|y'|$$ This decomposition works because $|xy'|=|x'y|$. And each of the parts $xw_ix'$ and $yw'_iy'$ is pretty clearly context free ($W_i\to w_i \mid SW_iS; S\to s_i \text{ for each }s_i\in\Sigma$). Although it can't help prove that $w=w'$ (which requires that every $w_i=w'_i$), it can produce the set of strings where some $w_i\ne w'_i$. That requires $O(|\Sigma|^2)$ productions to enumerate all the pairs of unequal symbols ($L\to W_iW_j\text{ for all }0\lt i,j\le|\Sigma|, i\ne j$) In your problem, you have the additional wrinkle of $b$; however, that just increases the number of cases to enumerate to $O(|\Sigma|^3)$.
Finding intervals of increase/decrease after fundamental theorem?
$$F'(x)=\frac9{\ln(2x)}$$ Since $\ln(2x) > 0$ for $x \ge 1$, $F'(x) > 0$ for all $x \ge 3$, so $F$ is increasing there. $$F''(x)=-\frac9{x\ln^2(2x)}$$ Since $x > 0$ and $\ln^2(2x) > 0$ for $x \ge 3$, we have $F''(x) < 0$, so $F$ is concave (and $F'$ is decreasing) for all $x \ge 3$.
Pointwise and Uniform convergence with one-sided limit
You need to add some assumptions on $g$; otherwise a counterexample would be $$ X=[0,1]\qquad g(x,y)=\begin{cases}0& x=0 \\ y/x & x\ne 0 \end{cases}$$ However, if $g$ is continuous, then for any $\varepsilon>0$, the set $g^{-1}((-\varepsilon,\varepsilon))$ is open as a subset of $X\times \mathbb R$ and contains the $x$-axis. At each point on the $x$-axis we can choose a nontrivial ball that is a subset of this preimage, and the radii can be chosen to vary continuously. Since $X$ is compact, there is a minimum radius, which can be used as a uniform $\delta$. (Also, if $g$ is continuous, the pointwise limit doesn't need to be specified explicitly, just that $g(x,0)=0$ for all $x$).
Probability, balls from boxes
To summarize callculus' suggestion: X1: 1b(black), 1w(white) from the 1st box. X2: 2b from 1st. X3: 2w from 1st. Y: 1b from the 2nd box. Then P(X1|Y)=$\frac{P(Y|X1)P(X1)}{P(Y)}$, where: P(Y|X1)= $\frac{19}{34}$ P(Y) = $\sum_{i=1}^3 P(Y|Xi)P(Xi)$ P(X1)= $\frac{C_1^{15}C_1^{12}}{C_2^{27}}$ P(X2)= $\frac{C_2^{15}}{C_2^{27}}$ P(X3)= $\frac{C_2^{12}}{C_2^{27}}$ Hope I didn't type something wrong. You should be able to figure out the computations. BTW, when you find any probability greater than 1, something is definitely wrong.
Recursive formula for points of algebraic curves over finite fields
Surely you mean $$N_f(p,k)=p^k+1-\sum_{i=1}^{2g}w_i^k?$$ If $b_1,\ldots,b_m$ and $c_1,\ldots,c_m$ are any numbers and we define $$a_n=\sum_{k=1}^m b_kc_k^n$$ then the sequence $(a_n)$ satisfies the recurrence $$a_n=-\sum_{k=1}^m u_ka_{n-k}$$ where $$\prod_{j=1}^m(X-c_j)=X^m+\sum_{j=1}^m u_jX^{m-j}.$$ In your example, the $c_k$ are $p,1,w_1,\ldots,w_{2g}$ and the $b_k$ are $1,1,-1,\ldots,-1$.
Asking for a reference: when is a smooth image of a surface (/ smooth manifold) again a surface (/ smooth manifold)?
An example of a classical statement is one which you encounter during your first course in general topology, namely: "If $f: X \to Y$ is a continuous bijection with $X$ compact and $Y$ Hausdorff, then $f$ is a homeomorphism." Also look here: How to use the corollary (manifold version rank theorem) to prove the quoted theorem?
I need help with this factorization problem?
$$1-8xy-x^2-16y^2=1-(x^2+(4y)^2+2x\cdot 4y)=1^2-(x+4y)^2=(1+x+4y)(1-x-4y)$$
Show that if $\frac{1}{\mu\left(E\right)}\cdot\int_{E}fd\mu\in C$ for all $E$ with $\mu\left(E\right)>0$ then $f\left(x\right)\in C$ almost surely
If $\mu(f^{-1}(\mathbb{R}\setminus C)) > 0$, then, since $\mathbb{R}\setminus C$ is the disjoint union of countably many open intervals, there is an open interval $(a,b)$ in the complement of $C$ with $\mu(f^{-1}(a,b)) > 0$. Then there is a compact interval $[\alpha,\beta] \subset (a,b)$ with $\mu(f^{-1}([\alpha,\beta])) > 0$. For $E = f^{-1}([\alpha,\beta])$, we then have $$\frac{1}{\mu(E)}\cdot\int_E f\,d\mu \in [\alpha,\beta],$$ which contradicts the assumption that the averages always lie in $C$.
Determine isolated singularities of $g(z)=\frac{\sin(z)}{1-\tan(z)}$
The isolated singularities of $g(z)=\frac{\sin(z)}{1-\tan(z)}$ occur at the zeroes of $1-\tan(z)$, which lead to simple poles of $g$. Note that there are also singularities at the zeroes of $\cos(z)$ (i.e., $z=\frac{\pi}{2}+n\pi$) since $\tan(z)$ is undefined there, but these are removable since we may define $g$ as $0$. And once removed, $g$ is analytic in a neighborhood of $z=\frac{\pi}{2}+n\pi$. As you correctly identified, the poles of $g$ are located at values of $z$ such that $\tan(z)=1$. These values are roots of $e^{i2z}=i$ which implies that $$z=\frac\pi4+n\pi$$ for all integers $n$. We can construct Laurent series in each annulus for which $\frac\pi4+n\pi<|z|<\frac\pi4+(n+1)\pi$. Note that when $n=0$, the Laurent series is a Taylor series.
If $Mod_S\cong Mod_R$ and $Mod_R$ is ACC, then $Mod_S$ is ACC.
If $F:A\to B$ is an equivalence of categories, show that for all objects $x$ in $A$ the functor induces an isomorphism from the poset of subobjects of $x$ to that of $F(x)$. Now in your situation, the equivalence maps $R$ to $F(R)$, which is a projective generator in $Mod_S$. If $R$ is noetherian, then $F(R)$ is noetherian. Since $F(R)$ is a generator, there is a surjection from a direct sum of copies of $F(R)$ to $S$, and since $S$ is generated by $1$, there is in fact such a map on a finite direct sum of copies of $F(R)$. It follows that $S$ is noetherian, as it is a quotient of a noetherian $S$-module.
Does the Frobenius endomorphism apply to polynomials?
Yes. Note that $(x+a)^p=\sum_{r=0}^p\binom{p}{r}x^ra^{p-r}=x^p+a^p$ because $p$ divides $\binom{p}{r}$ except when $r=0,p$.
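A quick symbolic check with SymPy, for $p=5$ (the point being that every mixed coefficient is a binomial $\binom{p}{r}$ with $0<r<p$, hence divisible by $p$, so those terms vanish in characteristic $p$):

```python
import sympy as sp

x, a = sp.symbols('x a')
p = 5
diff = sp.expand((x + a)**p - x**p - a**p)
# the surviving coefficients are binomial(p, r) for 0 < r < p, all divisible by p,
# so (x + a)^p reduces to x^p + a^p once we work mod p
assert all(c % p == 0 for c in sp.Poly(diff, x, a).coeffs())
```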
An infimum of a double integral on the unit disk
Let $f$ be any $C^\infty$ function on $[0,1]$ that vanishes identically on $[0,\frac12]$ and satisfies $f(1) = 1$. For any $\alpha \in (0,1)$, consider the following function defined on the unit disk $$u_\alpha(x,y) = f((x^2+y^2)^{\alpha/2})$$ Since $f(t)$ vanishes identically on $[0,\frac12]$, $u_\alpha(x,y)$ is $C^\infty$ on the unit disk. It vanishes at $0$ and equals $1$ on the unit circle. This means $u_\alpha$ is a candidate over which to take the infimum. Notice $$\begin{align} \int_{x^2+y^2 \leq 1} \left[ \left ( \frac{\partial u_\alpha}{\partial x} \right)^2 + \left ( \frac{\partial u_\alpha}{\partial y} \right )^2 \right]dx dy &= 2\pi \int_0^1 r \left(\frac{df(r^\alpha)}{dr}\right)^2 dr = 2\pi \int_0^1 r \left( \alpha r^{\alpha-1} \frac{df(r^\alpha)}{dr^\alpha}\right)^2 dr\\ &= 2\pi \alpha \int_0^1 r^\alpha \left(\frac{df(r^\alpha)}{dr^\alpha}\right)^2 dr^\alpha = 2\pi \alpha \int_0^1 s \left(\frac{df(s)}{ds}\right)^2 ds \end{align} $$ We find $$0 \le \inf_u \int_{x^2+y^2 \leq 1} \left[ \left ( \frac{\partial u}{\partial x} \right)^2 + \left ( \frac{\partial u}{\partial y} \right )^2 \right]dx dy \le \inf_{\alpha \in (0,1)} \left\{ 2\pi \alpha \int_0^1 s \left(\frac{df(s)}{ds}\right)^2 ds \right\} = 0$$ i.e. the desired infimum is $0$.
show that $\lim_{n\rightarrow\infty}m(V_n)=m(F)$.
Hint: Let $A_n=\{y: d(y,F) <\frac 1 n\}$. Check that $V_n \subseteq A_n$. Note that $A_n$ decreases to $F$ so $m(V_n) \leq m(A_n) \to m(F)$.
Magnitude of $10 \uparrow \uparrow \uparrow 10$
The magnitude of $10↑↑↑10$ can be understood, but not with ordinary power towers. There are at least two things you can do to make such a number more accessible:

(1) Use powerful and relevant notation. For a simple (but rough) start, you can create a list of milestone numbers using the notation $10↑^{k}n$, with $2\le n\le 9$. Note that $10↑^{k}10=10↑^{k+1}2$, so whenever $n$ increases from 9 to 10, you can drop it back to 2 and add another arrow. This way you'll have a continuous progression of numbers, starting with ordinary scientific notation: $10↑2 = 10 \times 10 = 100$ $10↑3 = 10 \times 10 \times 10 = 1000$ $10↑4 = 10 \times 10 \times 10 \times 10 = 10{,}000$ $10↑5 = 10 \times 10 \times 10 \times 10 \times 10 = 100{,}000$ $10↑6 = 10 \times 10 \times 10 \times 10 \times 10 \times 10 = 1{,}000{,}000$ $10↑7 = 10 \times 10 \times 10 \times 10 \times 10 \times 10 \times 10 = 10{,}000{,}000$ $10↑8 = 10 \times 10 \times 10 \times 10 \times 10 \times 10 \times 10 \times 10 = 100{,}000{,}000$ $10↑9 = 10 \times 10 \times 10 \times 10 \times 10 \times 10 \times 10 \times 10 \times 10 = 1{,}000{,}000{,}000$ $10↑10 = 10 \times 10 \times 10 \times 10 \times 10 \times 10 \times 10 \times 10 \times 10 \times 10 = 10{,}000{,}000{,}000 = 10↑↑2$

Continuing with the simple power towers: $10↑↑2 = 10^{10}$ $10↑↑3 = 10^{10^{10}}$ $10↑↑4 = 10^{10^{10^{10}}}$ . . . $10↑↑10 = 10^{10^{10^{10^{10^{10^{10^{10^{10^{10}}}}}}}}} = 10↑↑↑2$

And finally, the iterated power towers: $10↑↑↑2 = 10↑↑10$ $10↑↑↑3 = 10↑↑10↑↑10$ $10↑↑↑4 = 10↑↑10↑↑10↑↑10$ . . . $10↑↑↑10 = 10↑↑10↑↑10↑↑10↑↑10↑↑10↑↑10↑↑10↑↑10↑↑10 = 10↑↑↑↑2$

That's the number scale I recommend you use, to "reach" $10↑↑↑10$ without overwhelming yourself. The important thing here is that by the time you add the third arrow, you aren't working with the actual power-towers anymore. At this point you should forget them altogether, and think solely in terms of double-arrows. Just like you wouldn't bother trying to build a googolplex by multiplying a googol $10$'s together.

(2) Familiarize yourself with some actual uses of these numbers. It is very difficult to understand the magnitude of a number when one doesn't have a few examples of what this number actually means. We understand millions and billions because we have examples of things that these numbers refer to, but what could a number like $10↑↑↑10$ actually be compared to? Fortunately, numbers of this magnitude appear in quite a few areas of mathematics. Besides the obvious "application" in recreational mathematics, such numbers appear naturally in more serious topics such as Goodstein Sequences, Turing Machines, Ramsey Theory and even when dealing with the very foundation of mathematics itself. Familiarizing yourself with such examples will make the numbers much more "real" to you. For example, the 5th Goodstein Sequence terminates after roughly $10↑↑↑4$ terms. The 6th one terminates after roughly $10↑^{5}6$ terms. So your $10↑↑↑10$ is somewhere in between the lengths of these two sequences. Once you know what a Goodstein Sequence actually is (and after you play with them a bit), these facts really do help you understand the magnitude of the numbers involved.
Taking derivatives with respect to a matrix
Generally I find it easier to think of derivatives in terms of 'perturbations' rather than as a coordinate wise thing. So, think of finding the derivative as gathering the $h$ terms in the expansion $\phi(b+h)-\phi(b)$. In this case we have $\phi(b) = \operatorname{tr}((Xb-y)^T (Xb-y))$, so $\phi(b+h) = \operatorname{tr}((b+h)^T X^T X(b+h)- 2 y^T X(b+h)+ y^Ty)$ and so we have $\phi(b+h)-\phi(b)= \operatorname{tr}(2b^T X^T Xh- 2 y^T Xh + h^TX^TXh)$. Gathering the terms in $h$ we see that $D\phi(b)(h) = \operatorname{tr}(2b^T X^T Xh- 2 y^T Xh) = 2\operatorname{tr}((b^T X^T X- y^T X)h)$. (Note that I have surreptitiously used the following result above: Since $\operatorname{tr} Z = \operatorname{tr} Z^T$, we have $\operatorname{tr} (Z+Z^T) = \operatorname{tr} (2Z)$.) If we have $D\phi(b)(h) = 0$ for all $h$, we must have $b^T X^T X- y^T X = 0$, or equivalently $X^T(Xb-y) = 0$. Aside: Note that if $L$ is linear (such as trace), then we have $DL(x)(h) = Lh$. Hence $D(L \circ f)(x) (h) = L(D(f(x)(h))$, so the main challenge above is determining the derivative of $b \mapsto (Xb-y)^T (Xb-y)$. Also, note that $\phi(b) = \|Xb-y\|_F^2$, the Frobenius norm squared.
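A quick numerical sanity check of the derivative formula with NumPy (variable names and sizes are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 4))
y = rng.standard_normal(10)
b = rng.standard_normal(4)
h = 1e-6 * rng.standard_normal(4)

def phi(v):
    return np.sum((X @ v - y) ** 2)          # equals tr((Xv - y)^T (Xv - y))

deriv = 2 * (b @ X.T @ X - y @ X) @ h        # D phi(b)(h) from the calculation above
print(deriv, phi(b + h) - phi(b))            # the two agree up to O(||h||^2)
```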
Bijection between cartesian products of sets to a given set of same total cardinality
One certainly exists: just take a lexicographical ordering. If you want something a bit more explicit, then you could try a recursive formula. Name the $n$ sets $S_1, \ldots, S_n$. Define $\phi_1\colon S_1 \to \{1, \ldots, k_1\}$ by: $$ \phi_1(x_1) = x_1 $$ Then for each $r \in \{2, \ldots, n\}$, define $\phi_r\colon S_1 \times \cdots \times S_r \to \{1, \ldots, k_1\ldots k_r\}$ by: $$ \phi_r(x_1, \ldots, x_r) = k_r \cdot (\phi_{r-1}(x_1, \ldots, x_{r-1}) - 1) + x_r $$ It can be shown via the Division Algorithm that $\phi_n$ is a bijection, as desired.
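A minimal Python sketch of the recursive formula, together with a brute-force check that it really is a bijection (the function name and the particular $k_r$ are mine):

```python
from itertools import product

def phi(xs, ks):
    """Map (x_1, ..., x_n), with x_r in {1, ..., k_r}, to {1, ..., k_1 * ... * k_n}."""
    value = xs[0]                        # phi_1(x_1) = x_1
    for x, k in zip(xs[1:], ks[1:]):
        value = k * (value - 1) + x      # phi_r = k_r * (phi_{r-1} - 1) + x_r
    return value

ks = (3, 2, 4)
images = {phi(xs, ks) for xs in product(*[range(1, k + 1) for k in ks])}
assert images == set(range(1, 3 * 2 * 4 + 1))   # phi_n is a bijection onto {1, ..., 24}
```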
Condition for the existence of a Markov process from the properties of a semigroup $\{T_t\}$
(a) Dynkin is still a pretty standard reference for this kind of material. (b) The existence of a Markov process with transition function $P$ is an immediate consequence of the Kolmogorov extension theorem, yes. But Kolmogorov might give you a process that is not strong Markov, not right continuous, or bad in other ways. To get a "nice" process with transition function $P$, you have to work harder (and perhaps assume more about $P$). The condition given definitely does not imply (1). Consider for example the transition semigroup of one-dimensional Brownian motion, given by $$P(t, x, \Gamma) = \int_{\Gamma} \frac{1}{\sqrt{2 \pi t}} e^{-|x-y|^2/(2t)}\,dy.$$ It's pretty easy to verify that this satisfies Stroock and Varadhan's condition. But it does not satisfy (1): for example, if you take $\Gamma = \{x\}$, we have $P(t, x, \{x\}) = 0$ for all $t > 0$, yet $\delta_x(\{x\})=1$. This also answers your question (c), since the Brownian semigroup $T_t$ is strongly continuous and has $Lf = -f''$ as its generator.
Evolutionary algorithm
I am not aware of any video on this topic, but you may

1. check the GECCO'2014 tutorial https://www.lri.fr/~hansen/gecco2014-CMA-ES-tutorial.pdf
2. play with a simplified but still core CMA-ES code in MATLAB, 'pure CMA-ES': https://www.lri.fr/~hansen/purecmaes.m
3. read 'Adaptive Encoding: How to Render Search Coordinate System Invariant' by N. Hansen, PPSN 2008

(I cannot provide >2 links as a guest)
Matrix form of differential equation, non-diagonalizable
This matrix is in Jordan normal form. If you denote it $A$, the solution is $$\vec x(t)=\exp(At)\vec x(0),$$ so you have to compute the exponential of this matrix. Now $A$ is the sum $D+N$, where $D$ is the diagonal matrix $ \lambda I$ and $N$ is the nilpotent matrix $\begin{bmatrix}0&1&0\\0&0&1\\0&0&0\end{bmatrix}$. Furthermore $D$ and $N$ commute, so $$\exp(D+N)t=\exp Dt\,\exp Nt=\mathrm e^{\lambda t} I\,\exp Nt. $$ Now remember $\exp Nt=I+Nt+\dfrac{N^2t^2}{2}+\dfrac{N^3t^3}{3!}+\dotsm$, and observe that $$N^2=\smash[b]{\begin{bmatrix}0&0&1\\0&0&0\\0&0&0\end{bmatrix}}, \qquad N^3=0,$$ so that $$\exp Nt=I+Nt+\dfrac{N^2t^2}{2}= \smash{\begin{bmatrix}1&t&\frac{t^2}2\\0&1&t\\0&0&1\end{bmatrix}},$$ and finally $$\exp At=\mathrm e^{\lambda t}I\begin{bmatrix}1&t&\frac{t^2}2\\0&1&t\\0&0&1\end{bmatrix}=\begin{bmatrix}\mathrm e^{\lambda t}&te^{\lambda t}&\frac{t^2}2e^{\lambda t}\\0&e^{\lambda t}&te^{\lambda t}\\0&0&e^{\lambda t}\end{bmatrix}.$$
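A numerical sanity check of the closed form with SciPy, for one arbitrary choice of $\lambda$ and $t$:

```python
import numpy as np
from scipy.linalg import expm

lam, t = 1.5, 0.7
A = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])

closed_form = np.exp(lam * t) * np.array([[1.0, t, t**2 / 2],
                                          [0.0, 1.0, t],
                                          [0.0, 0.0, 1.0]])

assert np.allclose(expm(A * t), closed_form)
```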
Find all positive integers $n$ that have two prime factors, and such that $n$ has three times fewer divisors than $n^2$
Let $n=p^rq^s$ with $p\ne q$ prime and $r,s\ge 1$. Then $n$ has $\tau(n)=(r+1)(s+1)$ divisors. Similarly $n^2$ has $(2r+1)(2s+1)$ divisors. You need to solve $$ (2r+1)(2s+1) = 3(r+1)(s+1).$$ This is equivalent to $rs-r-s-2=0$, i.e. $(r-1)(s-1)=3$. hence $r=2,s=4$ (or vice versa, which corresponds to switching $p$ and $q$). Thus the smallest example is $n=144$.
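A one-line check of the smallest example with SymPy:

```python
from sympy import divisor_count

n = 144   # = 2**4 * 3**2, i.e. r = 4, s = 2 in the notation above
assert divisor_count(n) == 15 and divisor_count(n**2) == 45   # 45 == 3 * 15
```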
proof of a property of modular arithmetic
If $a \equiv b \mod n$ and $a \equiv b \mod n^\prime$, then there are integers $k$ and $k^\prime$ such that $kn = a - b = k^\prime n^\prime$. If $kn = k^\prime n^\prime$ and $(n,n^\prime) = 1$, then what is $(n,k^\prime)$?
If $\sum\limits_{k=1}^n y_k\geq n$ and $\sum\limits_{k=1}^n \frac{1}{y_k}\geq n$, then $\prod\limits_{k=1}^n y_k\geq 1$?
Let $a>0$ be any number. If $n \geq 3$, just pick $y_1=n$, $y_2=\frac{1}{n}$, $y_3=a$ and $y_4=\dots=y_n=1$. Then $y_1+\ldots+y_n\geq n$ and $1/y_1+\ldots+1/y_n\geq n$. Anyhow $y_1y_2\cdots y_n =a$. Thus the product can be anything. If $n=1$, then the problem is trivial. If $n=2$, then some counterexamples were already provided. Anyhow, if $a<1$ pick $y_1=2$ and $y_2=\frac{a}{2}$; while if $a>1$, pick $y_1=2a$ and $y_2=\frac{1}{2}$, and again the product is $a$. I leave out the trivial case $a=1$. So, given those conditions, as long as $n \geq 2$ the product can be any positive number.
Variant of the Truel Question: What is the Probability that A wins if A starts by shooting C?
Let us assume that each shooter tries to maximize their chances of survival. In this case, $A$ and $B$ will always aim for $C$ first; if they do not, they are certain to die if they should hit their opponent and $C$ gets their turn. $C$ will aim for $B$ if both $A$ and $B$ should be alive. We can distinguish the following cases:

- $A$ kills $C$ and wins the duel with $B$ shooting first
- $A$ misses $C$, $B$ kills $C$ and $A$ wins the duel with $A$ shooting first
- $A$ misses $C$, $B$ misses $C$, $C$ kills $B$ and $A$ wins the duel with $A$ shooting first

In all other cases, $A$ dies. We find the following probabilities:

- $P(A~\text{kills}~C~\text{in first turn}) = \frac{1}{3}$
- $P(B~\text{wins}, B~\text{starts} | C~\text{dead}) = \frac{2}{3} + \frac{1}{3} \frac{2}{3} P(B~\text{wins}, B~\text{starts} | C~\text{dead}) \iff P(B~\text{wins}, B~\text{starts} | C~\text{dead}) = \frac{6}{7}$
- $P(A~\text{wins}, B~\text{starts} | C~\text{dead}) = 1 - P(B~\text{wins}, B~\text{starts} | C~\text{dead}) = \frac{1}{7}$
- $P(A~\text{misses}~C) = \frac{2}{3}$
- $P(B~\text{kills}~C | A~\text{misses}~C) = \frac{2}{3}$
- $P(A~\text{wins}, A~\text{starts} | C~\text{dead}) = \frac{1}{3} + \frac{2}{3} \frac{1}{3} P(A~\text{wins}, A~\text{starts} | C~\text{dead}) \iff P(A~\text{wins}, A~\text{starts} | C~\text{dead}) = \frac{3}{7}$
- $P(B~\text{misses}~C | A~\text{misses}~C) = \frac{1}{3}$
- $P(C~\text{kills}~B | A~\text{misses}~C, B~\text{misses}~C) = 1$
- $P(A~\text{wins} | B~\text{dead}) = \frac{1}{3}$

Putting this all together, we find: $$P(A~\text{wins}) = \frac{1}{3} \cdot \frac{1}{7} + \frac{2}{3} \cdot \frac{2}{3} \cdot \frac{3}{7} + \frac{2}{3} \cdot \frac{1}{3} \cdot 1 \cdot \frac{1}{3} = \frac{1}{21} + \frac{4}{21} + \frac{2}{27} = \frac{59}{189} \approx 0.312$$
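If you want to double-check the result, here is a Monte Carlo sketch in Python under the stated accuracies ($\frac13$, $\frac23$, $1$) and strategies (the structure and names are mine):

```python
import random

def truel_once(rng):
    """One simulated truel: A, B, C shoot in that order; A and B aim at C while C lives, C aims at B."""
    hit_prob = {'A': 1/3, 'B': 2/3, 'C': 1.0}
    alive = {'A', 'B', 'C'}
    while len(alive) > 1:
        for shooter in ('A', 'B', 'C'):
            if shooter not in alive:
                continue
            if shooter in ('A', 'B'):
                target = 'C' if 'C' in alive else ('B' if shooter == 'A' else 'A')
            else:
                target = 'B' if 'B' in alive else 'A'
            if rng.random() < hit_prob[shooter]:
                alive.discard(target)
            if len(alive) == 1:
                break
    return next(iter(alive))

rng = random.Random(0)
trials = 200_000
p_a_wins = sum(truel_once(rng) == 'A' for _ in range(trials)) / trials
print(p_a_wins, 59 / 189)   # both should be about 0.312
```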
Take partial derivative of $\sum_{i=1}^n (y_i - {ae^{x_i^2}} -bx_i^3)^2$ with respect to a
You have to use the chain rule on this \begin{eqnarray*} f(u) &=& u^2 \\ g(a,b,x,y) &=& y_i -ae^{x_i^2} - bx^3_i \end{eqnarray*} so that the above function is a summation over terms looking like $f(g(a,b,x,y))$. Since the derivative is a linear operator you can exchange summation and differentiation to obtain $$\frac{\partial}{\partial a} \sum_i (y_i -ae^{x_i^2} - bx^3_i)^2 = \sum_i \frac{\partial}{\partial a} (y_i -ae^{x_i^2} - bx^3_i)^2$$ The chain rule on the individual pieces states $$\frac{\partial}{\partial a} f(g(a,b,x,y)) = \frac{d f}{d g} \frac{\partial g}{\partial a} =2 (y_i -ae^{x_i^2} - bx^3_i)(-e^{x^2_i})$$ We can then replace the summation to obtain $$\sum_i -2 (y_i -ae^{x_i^2} - bx^3_i)e^{x^2_i} = -2\sum_i(y_i -ae^{x_i^2} - bx^3_i)e^{x^2_i}$$
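A quick SymPy check of the derivative of a single term of the sum (symbols treated as scalars):

```python
import sympy as sp

a, b, x, y = sp.symbols('a b x y')
term = (y - a * sp.exp(x**2) - b * x**3)**2           # one term of the summation
by_hand = -2 * (y - a * sp.exp(x**2) - b * x**3) * sp.exp(x**2)

assert sp.simplify(sp.diff(term, a) - by_hand) == 0   # matches the chain-rule result above
```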
Prove that if a product $AB$ of $n\times n$ matrices is invertible, so are the factors $A$ and $B$.
First: your solution isn't actually a solution. By writing $A^{-1}$ you are assuming that $A$ is invertible, which is what you want to prove. One way to do this would be to note that $$ \det(AB) = \det(A)\det(B). $$ If $AB$ is invertible, then $\det(AB)\neq 0$, and so $\det(A)\neq 0$ and $\det(B)\neq 0$. Another way: If $AB$ is invertible, then there is a $C$ such that $$ (AB)C = I $$ That is $$ A(BC) = I $$ So $BC$ is a right inverse of $A$, and so $A$ is invertible. Likewise $B$ is invertible. (See this question/answers for a bit more: If $AB = I$ then $BA = I$).
Non-injective GΓΆdel numbering scheme in Nagel and Newman
In Goedel's original paper, all the single symbols of the formal alphabet were encoded as distinct odd numbers. Since every encoded alphabet sequence begins with a positive power of two, i.e. is an even number, for any number which encodes anything at all, if its exponent for two is odd it must encode a formula, whereas if its exponent of two is even the number must encode a sequence of formulae. Note that Nagel and Newman's encoding causes problems by encoding $\vee$ as 2.
Automorphism group homomorphism induced by the mapping between sets
If $K:B \to A$ is another morphism, you get a function $\Phi:Aut(A) \to Aut(B)$ in the following way. $g\in Aut(A)$ maps to $\Phi(g)\in Aut(B)$ with $\Phi(g)(b)=F(g(K(b)))$. If you want $\Phi$ to be a homomorphism, you need, for every $g,g'\in Aut(A)$, $$F(gg'(K(b)))=\Phi(gg')(b)=\Phi(g)\Phi(g')(b)=F(g(KFg'(K(b)))).$$ One way to get this equality is to have $KF=Id_A$, which is weaker than asking that $K=F^{-1}$. So the condition that $F$ is an isomorphism can be relaxed to the condition that $F$ admits a retract $K$, i.e., $KF=Id_A$.
A linear map of determinant $1$ cannot decrease the length of all of the vectors in a Parseval frame
The statement $\sum_ic_i\langle x,u_i\rangle u_i=x$ for all $x$ can be expressed in matrix form as $$ \sum_ic_i u_iu_i^t=I_n $$ with $u^t$ representing the transpose of $u$ and $I_n$ is the identity matrix. As $uu^t$ has trace $\lVert u\rVert^2$, we can take the trace of the above expression, $$ \sum_ic_i\lVert u_i\rVert^2=n.\qquad{\rm(1)} $$ For an $n\times n$ matrix $T$, $$ \sum_ic_i(Tu_i)(Tu_i)^t=TT^t $$ and, taking the trace again, $$ \sum_ic_i\lVert Tu_i\rVert^2={\rm tr}(TT^t). $$ The symmetric matrix $TT^t$ has nonnegative eigenvalues summing to its trace and whose product is ${\rm det}(T)^2$. By the AM-GM inequality, $$ \sum_ic_i\lVert Tu_i\rVert^2\ge n\,\lvert{\rm det}(T)\rvert^{2/n}.\qquad{\rm(2)} $$ It is assumed that the $c_i$ are nonnegative so, if $T$ has determinant $1$, then it cannot strictly reduce the size of each $u_i$, since (2) would then contradict (1).
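A small numerical illustration with NumPy, using the Mercedes-Benz frame in $\mathbb R^2$ (three unit vectors $120^\circ$ apart with $c_i=\frac23$ — my choice of example) to check the frame identity and identity (1):

```python
import numpy as np

# Mercedes-Benz frame in R^2: three unit vectors 120 degrees apart, weights c_i = 2/3
angles = np.pi / 2 + np.array([0, 2 * np.pi / 3, 4 * np.pi / 3])
U = np.column_stack([np.cos(angles), np.sin(angles)])          # rows are the u_i
c = np.full(3, 2 / 3)

frame_operator = sum(ci * np.outer(u, u) for ci, u in zip(c, U))
assert np.allclose(frame_operator, np.eye(2))                   # sum_i c_i u_i u_i^t = I_2
assert np.isclose(sum(c * (U**2).sum(axis=1)), 2)               # identity (1): sum_i c_i ||u_i||^2 = n
```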
Problem about Riemann lower sum in piecewise-defined function
The evaluation of this Riemann lower sum $L_{2n}$ depends on its definition. Let $x_j=j/n$ with $j=-n,\dots, n$ is a uniform partition of the interval $[-1,1]$ and note that $f$ is decreasing in $[-1,0]$ and increasing in $[0,1]$. Let $f_1(x)=1-x$ and $f_2(x)=x$. 1) The infimum is taken over the half-closed interval $[x_{j-1},x_j)$: $$ \begin{align} L_{2n}&=\frac{1}{n}\sum_{j=-n+1}^{n}\inf_{t\in [x_{j-1},x_j)}f(t) \\ &=\frac{1}{n}\sum_{j=-n+1}^{0}f_1(x_j)+\frac{1}{n}\sum_{j=0}^{n-1}f_2(x_j)=1-\frac{1}{n^2}\sum_{j=-n+1}^{0}j+\frac{1}{n^2}\sum_{j=0}^{n-1}j\\ &=1+\frac{1}{n^2}\sum_{j=0}^{n-1}j+\frac{1}{n^2}\sum_{j=0}^{n-1}j=1+\frac{n(n-1)}{n^2}=2-\frac{1}{n}. \end{align}$$ 2) The infimum is taken over the closed interval $[x_{j-1},x_j]$: $$\begin{align} L_{2n}&=\frac{1}{n}\sum_{j=-n+1}^{n}\inf_{t\in [x_{j-1},x_j]}f(t) \\ &=\frac{1}{n}\sum_{j=-n+1}^{-1}f_1(x_j)+\frac{f_2(x_0)}{n}+\frac{1}{n}\sum_{j=0}^{n-1}f_2(x_j)=\frac{n-1}{n}-\frac{1}{n^2}\sum_{j=-n+1}^{-1}j+\frac{1}{n^2}\sum_{j=0}^{n-1}j\\ &=\frac{n-1}{n}+\frac{1}{n^2}\sum_{j=1}^{n-1}j+\frac{1}{n^2}\sum_{j=0}^{n-1}j=\frac{n-1}{n}+\frac{n(n-1)}{n^2}=2-\frac{2}{n}. \end{align}$$
What does the term "geometry" means in a Banach space?
Yeah, convexity is a geometric property. It often means you study properties of the space by looking at geometric properties of the unit ball or other convex sets. As an important example, there's the notion of dentability. Fix a Banach Space $X$ and closed convex subset $C$. Let $f \in X^* \setminus \lbrace 0 \rbrace$, and suppose $\alpha = \sup f(C) < \infty$. If we define $S$ by $$S = f^{-1}[\beta, \alpha] \cap C$$ for some $\beta < \alpha$, then $S$ is a slice. We say $C$ is dentable if it admits slices of arbitrarily small diameter. We also say $X$ is dentable if every closed bounded non-empty convex subset is dentable. Dentability is considered a geometric property. Slices are very straightforward, in the sense that you're basically just "slicing" off part of the convex set with a hyperplane defined by a functional. Dentability means you want to be able to make it arbitrarily small in diameter, which you can visualise by enveloping in balls of arbitrarily small diameter. Dentability of the space, as it turns out, is equivalent to the Radon-Nikodym property, a property of vector measures, holding on the Banach Space. So, a geometric property implies a measure theoretic property. That's an example of a geometry of Banach Spaces result. A weaker form of dentability, weak$^*$ dentability, is also used to characterise the Mazur Intersection Property (which a space has, if every closed bounded convex subset can be realised as the intersection of balls), as well as define Asplund spaces, on which convex functions have highly desirable differentiability properties. As another example, the Bishop-Phelps theorem, guaranteeing density of support points on closed convex sets is also typically proven geometrically. It's done by intersecting pointed cones with convex sets, in a nested way, so that you obtain a nested family of (weakly) closed sets with null-convergent diameter. Completeness implies a (unique) point of intersection, at which the cone touches the convex set at one point. Then separation theorem applied to the set and the cone yields a supporting hyperplane (separation theorem is, in a sense, the most fundamental result for geometry of Banach Spaces).
Does there exist a convex function on an exterior domain that has zero limit at infinity?
Note that we can restrict the function to an entire line, say $\{x_1=100,\ x_2\in\mathbb{R}\}$. Then the function $f$ is still convex on this line, and a convex function on $\mathbb{R}$ with limit $0$ at both ends must be identically zero. Hence $f$ is always zero on this line. This makes $f$ equivalent to zero (at infinity, at least).
Holomorphic function such that $Ref + Imf \geq 1$
Notice that $\operatorname{Re}f+\operatorname{Im}f\geq 1$ forces $f$ to be non-vanishing, and in fact $|f|\geq\frac{1}{\sqrt2}$ since $\operatorname{Re}f+\operatorname{Im}f\leq\sqrt2\,|f|$. Hence $\frac1f:\Bbb C\to\Bbb C$ is entire and bounded, and therefore constant by Liouville's theorem; so $f$ is constant.
Linear Algebra Orthogonality Proof
Two vectors are orthogonal if and only if their inner product is $0.$ Now the inner product is bilinear, which means $\langle c_1v_1,c_2v_2 \rangle=c_1c_2\langle v_1,v_2\rangle=0$ as $\langle v_1,v_2\rangle=0.$ Therefore $c_1v_1$ and $c_2v_2$ are orthogonal.
Derivative of a complex function using difference quotient
\begin{align} f'(z) &=\lim\limits_{h\to0}\frac{(z+h){\bf Re}(z+h)-z{\bf Re}(z)}{h}\\ &=\dfrac12\lim_{h\to0}\frac{h(2z+h)+h\bar{z}+z\bar{h}+h\bar{h}}{h}\\ &=\dfrac12\lim_{h\to0}\left(2z+h+\bar{z}+\bar{h}+z\frac{\bar{h}}{h}\right)\\ &=z+\dfrac12\bar{z}+\dfrac12z\lim_{h\to0}\frac{\bar{h}}{h} \end{align} this limit exists only when $z=0$.
Limit of ratio of areas of triangles defined by tangents to a circle
This is plain geometry: join the point $\;C\;$ with the circle's center $\;O\;$, say. Then $\;M\;$ is on $\;OC\;$ (why?) and furthermore $\;OC\perp AB\;,\;\;OC\perp DE\;$ (why? Show that $\;AB\parallel DE\;$), which means that $\;\Delta ABC\sim\Delta DEC\;$. Now observe that $\;|CM|=|CO|-R\;$, where $\;R=$ the circle's radius, whereas if we denote by $\;P\;$ the intersection point of $\;AB\;$ with $\;OC\;$, then using the Pythagorean theorem we get $$|OP|^2=R^2-\left(\frac12|AB|\right)^2=R^2-\frac14|AB|^2$$ so $$|CP|=|CO|-|OP|=|CO|-\frac12\sqrt{4R^2-|AB|^2}$$ and the similarity ratio is then: $$\frac{|CM|}{|CP|}=\frac{|CO|-R}{|CO|-\frac12\sqrt{4R^2-|AB|^2}}$$ Well, now just use that $\;\frac{\Delta ABC}{\Delta DEC}=\left(\frac{|CM|}{|CP|}\right)^2\;$ and observe that when $\;AB\to 0\;$, all the quantities involved are constant except, of course, $\;|AB|\;$, which also tends to zero.
For each positive integer $n$, let $x_n=1/(n+1)+1/(n+2)+\cdots+1/(2n)$. Prove that the sequence $(x_n)$ converges.
We write $$x_n=\sum_{k=1}^n\frac{1}{n+k}=\frac{1}{n}\sum_{k=1}^n\frac{1}{1+k/n}\to\int_0^1\frac{dx}{1+x}=\log 2$$
Closed sets in the product topology
In the product topology (where each $X_n$ is discrete, I assume, as this is common in such a setting) one commonly used base for the open subsets of $X$ consists of all sets of the form $O(w,m) = \{x \in X: x_i = w_i, i = m,\ldots, m + l(w)-1\}$, where $w$ is a finite word of length $l(w)$ from the alphabet $\{1,\ldots,k\}$, all infinite words that agree with a given finite word $w$ at some index $m$. It's clear that these sets are open as standard basic products $X_1 \times \ldots \times X_{m-1} \times \{w_1\} \times \{w_2\} \times \ldots \{w_{l(w)}\} \times X_{m+l(w)} \times \ldots$. (This uses that $w$ has finite length and all $X_m$ are discrete). The complement of $X_K$ can be written as $\bigcup \{O(w,m): m \in \mathbb{N}, w \in K\}$, all words that do agree with a word from $K$ somewhere. As a union of (basic) open sets, the complement is thus open and the set $X_K$ closed (for any sized $K$).
Prove equilateral triangle if vertices $a,b,c$ satisfy $|a|=|b|=|c|$ and $\sum_{cyc}\frac1{8-a/b-b/a-a/c-c/a}=0.3$
Apply Jensen's inequality $$f(x_1) + f(x_2) + f(x_3) \ge 3f\left(\frac{x_1+x_2+x_3}3\right)$$ for the convex function $f(x)=\frac1x$ (each denominator below is positive) to get $$0.3=\sum_{{cyc}}\frac{1}{8-\frac{a}{b}-\frac{b}{a}-\frac{a}{c}-\frac{c}{a}} \ge \frac{9}{24-2\left(\frac{a}{b}+\frac{b}{a}+\frac bc+\frac cb+\frac{c}{a}+\frac{a}{c}\right)} $$ or, $$\frac{a}{b}+\frac{b}{a}+\frac bc+\frac cb+\frac{c}{a}+\frac{a}{c}\le -3\tag 1$$ Given that $|a|=|b|=|c|$, assume, without loss of generality $\alpha, \ \beta, \ \gamma >0$, $$\frac ba =e^{i\alpha},\>\>\>\>\>\frac cb =e^{i\beta},\>\>\>\>\>\frac ac=e^{i\gamma} \>\>\>\>\>\alpha + \beta + \gamma = 2\pi$$ Then, the inequality (1) becomes, $$\cos\alpha + \cos\beta + \cos \gamma \le -\frac32\tag 2$$ Note that, $$\cos\alpha + \cos\beta + \cos \gamma = 2\cos\frac{\alpha+\beta}2\cos\frac{\alpha-\beta}2 + 2\cos^2\frac\gamma2-1$$ $$= 2\cos^2\frac\gamma2 - 2\cos\frac{\gamma}2\cos\frac{\alpha-\beta}2 -1 \ge 2\cos^2\frac\gamma2 - 2\cos\frac{\gamma}2 -1 $$ $$= 2\left(\cos\frac\gamma2 - \frac12\right)^2 -\frac32 \ge-\frac32\tag 3$$ From (2) and (3), we have $$\cos\alpha + \cos\beta + \cos \gamma=-\frac32$$ and, according to (3), the equality occurs at $\cos\frac\gamma2 - \frac12=0$ and $\cos\frac{\alpha-\beta}2 =1$, which leads to $\alpha=\beta =\gamma = 120^\circ$. Thus, the neighboring vertices of equal modulus all have the same argument angle between them, hence forming an equilateral triangle.
Is there a proper way to prove that $f:[a,b] \to[a,b]$
The solution very much depends on the function itself, but you can make some general points. Start by finding the minimum and maximum of $f$ in the interval; if the minimum is $a$ and the maximum is $b$ then (assuming $f$ is continuous) the range is the same. If $f$ is not continuous then the range might be $[a,b]$ and it might be a subset of $[a,b]$. If the minimum is larger than $a$ and/or the maximum is smaller than $b$, then the range will always be a proper subset of $[a,b]$. If and only if the maximum of $f$ is larger than $b$ and/or the minimum is smaller than $a$ will the range contain elements outside of $[a,b]$. It's worth noting that if the range is a subset of $[a,b]$, then depending on your definition of range, it might be okay to expand the range to be exactly $[a,b]$; note that the function wouldn't be surjective if you do this. As for your example $$f(x)=e^{-x},\qquad[\ln(1.1),\ln(3)]$$ Let's find the minimum: the function $f$ is decreasing, so the minimum is $e^{-\ln(3)}=\frac13$. Since $\dfrac13>\ln(1.1)$, the minimum of the range exceeds the left endpoint of the interval, so the function does not have the same range and domain.
Proof of Implicit Function Theorem: special case
You have: $0=f(x+h,g(x+h))-f(x,g(x))$ by the multivariable Lagrange theorem for all small $h$ we have $\theta\in (0,1)$ such that: $$f(x+h,g(x+h))-f(x,g(x))=f_x(x + \theta h, g(x) + ΞΈ(g(x + h) βˆ’ g(x)))h+ f_y(x + \theta h, g(x) +\theta (g(x + h) βˆ’ g(x)))(g(x + h) βˆ’ g(x))$$ then by re-arranging we have: $$|g(x+h)-g(x)|=\frac{|f_x(x + \theta h, g(x) + ΞΈ(g(x + h) βˆ’ g(x)))|}{|f_y(x + \theta h, g(x) +\theta (g(x + h) βˆ’ g(x)))|} |h|$$ this gives you continuity since the RHS is bounded by $M|h|$. Now that you know $g$ is continuous. Use the above equation with absolute value to get that the derivative at $x$ is $\frac{-f_x(x,g(x))}{f_y(x,g(x))}$. If you need more details I guess I can add them.
How to prove $E[g(x)] = \int_0^\infty g'(x)S(x) dx$
I think what you are trying to prove is not exactly correct. I will work through the problem and show the correct answer at the end. You need to split the integral into two, one for the region $x\ge 0$ and one for $x\le 0$. In the positive region, instead of writing $f(x)=F'(x)$ and integrating by parts, you need to use $f=(F(x)-1)'$. This is the only choice which ensures that the resulting integral actually exists; note that $F(x)-1$ goes to zero as $x\to+\infty$, but $F(x)$ does not. \begin{align} \int_0^\infty g(x)f(x)\,dx &=\int_0^\infty g(x)[F(x)-1]'\,dx \\&=g(x)[F(x)-1]\Big|^\infty_0-\int_0^\infty g'(s)[F(x)-1]\,dx \\&=\underbrace{\Big(\lim_{x\to\infty}g(x)[F(x)-1]\Big)}_{=0}-g(0)[F(0)-1]+\int_0^\infty g'(x)S(x)\,dx \end{align} It takes some doing, but you can show that that limit actually is zero, as long as $E[g(X)]$ is finite. See Is it true that $\lim\limits_{x\to\infty}{x·P[X>x]}=0$? for some inspiration. Edit: Actually, I am not quite sure this is true without some additional assumptions on $g.$ It is true as long as $g$ is either bounded or monotonic. For the negative region, you do want to use $F(x)$ as the antiderivate of $f(x)$, because $\lim_{x\to-\infty}F(x)=0$. \begin{align} \int_{-\infty}^0 g(x)f(x)\,dx &=\int_{-\infty}^0 g(x)F(x)'\,dx \\&=g(x)F(x)\Big|^0_{-\infty}-\int_{-\infty}^0 g'(s)F(x)\,dx \\&=g(0)F(0)-\underbrace{\Big(\lim_{x\to-\infty}g(x)F(x)\Big)}_{=0}-\int_{-\infty}^0 g'(x)F(x)\,dx \end{align} Putting this all together, we get $$ \bbox[7px,border:2px solid red]{\int_{-\infty}^\infty g(x)f(x)\,dx=g(0)+\int_0^\infty g'(x)S(x)\,dx-\int_{-\infty}^0g'(x)F(x)\,dx} $$ This is correct for any function $g$ such that $E[g(X)]$ is finite. Edit: Well, as long as $g$ is either bounded or monotonic. If we make further assumptions about $g$, we can write this more nicely. Assuming $g(-\infty):=\lim_{x\to-\infty}g(x)$ exists, you would have $$ g(0)=g(-\infty)+\int_{-\infty}^0 g'(x)\,dx $$ so $$ \bbox[7px,border:2px solid green]{\int_{-\infty}^\infty g(x)f(x)\,dx=g(-\infty)+\int_{-\infty}^\infty g'(x)S(x)\,dx} $$ You cannot in general avoid the presence of some constant $g(a)$. This is because the $\int g'(x)S(x)\,dx$ part of the formula does not change when $g$ is shifted by a constant (this does not affect $g'$), but $E[g(x)]$ does change when $g$ is shifted by a constant.
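A quick symbolic check of the boxed identity with SymPy, for a distribution supported on $[0,\infty)$ (so the $F$-term vanishes), taking $X\sim\operatorname{Exp}(1)$ and $g(x)=x^2$ — my choice of example:

```python
import sympy as sp

x = sp.symbols('x', nonnegative=True)
f = sp.exp(-x)            # density of X ~ Exp(1), supported on [0, oo)
S = sp.exp(-x)            # survival function S(x) = P(X > x)
g = x**2

lhs = sp.integrate(g * f, (x, 0, sp.oo))                                  # E[g(X)] = 2
rhs = g.subs(x, 0) + sp.integrate(sp.diff(g, x) * S, (x, 0, sp.oo))       # boxed formula, F-term = 0
assert sp.simplify(lhs - rhs) == 0
```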
Does there exist any continuous bijection between $[0,1]$ and $(0,1)$, and between $[0,1]$ and $\mathbb{R}$?
The image of a continuous map of a compact metric space is compact. In particular, the image of a continuous map of a compact metric space into $\mathbb R$ is closed and bounded. Therefore, don't expect to find a continuous bijection between $[0,1]$, which is compact, and $(0,1)$ or $\mathbb R$, neither of which is compact. Other explanations can be found here.
How is $ \ \frac{1}{1+\frac{1}{x^2}}=\ \frac{x^2}{1+x^2} = \ 1 - \frac{x^2}{1+x^2}$?
$$\frac{1}{1+\frac{1}{x^2}}=\frac{x^2}{x^2}\times\frac{1}{1+\frac{1}{x^2}}=\frac{x^2}{x^2(1+\frac{1}{x^2})}=\frac{x^2}{x^2+1}=\frac{x^2+1-1}{x^2+1}=1-\frac{1}{x^2+1}$$ So, probably a typo somewhere.
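A quick symbolic check of the corrected chain of equalities, and of the version with the typo:

```python
import sympy as sp

x = sp.symbols('x', nonzero=True)
lhs = 1 / (1 + 1/x**2)

print(sp.simplify(lhs - x**2/(x**2 + 1)))        # 0
print(sp.simplify(lhs - (1 - 1/(x**2 + 1))))     # 0
print(sp.simplify(lhs - (1 - x**2/(x**2 + 1))))  # not 0, so the version in the question is off
```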
Show that $f(x,y) = (x^{\alpha} + y^{\alpha})^{1/\alpha}$ is concave for $0 < \alpha < 1$ and for $\alpha < 0$
Actually, it is simple to evaluate the Hessian by hand; it is given by $$H(x, y) = - \frac{(1-\alpha)x^\alpha y^\alpha(x^\alpha + y^\alpha)^{1/\alpha}}{x^2y^2(x^\alpha + y^\alpha)^2} \left( \begin{array}{cc} y^2 & -xy \\ -xy & x^2 \\ \end{array} \right). $$ Since $\left( \begin{array}{cc} y^2 & -xy \\ -xy & x^2 \\ \end{array} \right) = [y, -x]^T[y, -x]$ is positive semidefinite, $H(x, y)$ is negative semidefinite. We are done.
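As a quick spot check of the sign claim, here is a minimal sketch; the values $\alpha=1/2$, $x=2$, $y=3$ are arbitrary choices for the test:

```python
import sympy as sp
import numpy as np

x, y, a = sp.symbols('x y alpha', positive=True)
f = (x**a + y**a)**(1/a)
H = sp.hessian(f, (x, y))

# Spot-check negative semidefiniteness at a sample point for alpha = 1/2
H_num = np.array(H.subs({a: sp.Rational(1, 2), x: 2, y: 3}).evalf(), dtype=float)
print(np.linalg.eigvalsh(H_num))   # one eigenvalue ~0, the other negative
```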
Given a randomly generated binary matrix with fixed row and column weights, what is the probability that two columns have ones at the same row?
The required probability is $\dfrac{b/a-1}{b-1}$. There are many ways to see this; here is a particularly long-winded one. Call the two columns $S$ and $T$, and call the row containing the $1$ of column $S$ row $R$. The number of matrices that satisfy the given conditions and have a $1$ in the position $(R,S)$ (intersection of $R$ and $S$) is $${b-1\choose b/a-1}{b-b/a\choose b/a}\cdots{b-(a-1)b/a\choose b/a} \tag{1}$$ because the number of ways to choose the $1$'s in row $R$ is the first term, and the ways of choosing the $1$'s in the remaining rows are the subsequent terms (the order of the rows doesn't matter). Similarly, the number of matrices that satisfy the given conditions and have a $1$ in position $(R,S)$ and a $1$ in position $(R,T)$ is $${b-2\choose b/a-2}{b-b/a\choose b/a}\cdots{b-(a-1)b/a\choose b/a}\tag{2}$$ Divide $(2)$ by $(1)$ to get the required probability $=\dfrac{b/a-1}{b-1}$. To see that this is less than or equal to $1/a$, suppose $\dfrac{b/a-1}{b-1} \geq \dfrac{1}{a}$. Then $\dfrac{b-a}{b-1} \geq 1$, which is only possible if $a = 1$, in which case equality holds. If $a>1$ then $\dfrac{b/a-1}{b-1} < \dfrac{1}{a}$.
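If you want to double-check the value by simulation, here is a minimal sketch; it assumes the setup implied above (an $a\times b$ matrix, each row containing $b/a$ ones and each column exactly one), and the parameters $a=4$, $b=12$ are arbitrary:

```python
import random

# Hypothetical parameters: a rows, b columns, each row has b/a ones,
# each column has exactly one 1
a, b = 4, 12
trials = 200_000

labels = [r for r in range(a) for _ in range(b // a)]  # row index of the 1 in each column
hits = 0
for _ in range(trials):
    random.shuffle(labels)
    hits += labels[0] == labels[1]   # do columns S and T share a row?

print(hits / trials)            # Monte Carlo estimate
print((b // a - 1) / (b - 1))   # (b/a - 1)/(b - 1) = 2/11 ≈ 0.1818
```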
Real part of $\sqrt{ai-1}$
Convert $ai-1$ into polar form. Let $ai-1 = r(\cos\theta + i\sin\theta)$. Then $$\begin{align*} a &= r\sin\theta\\-1 &= r\cos\theta\\ r^2 &= (r\sin\theta)^2 + (r\cos\theta)^2\\ &= a^2 + 1\\ r &= \sqrt{a^2 + 1}\\ \tan \theta &= \frac{r\sin\theta}{r\cos\theta}\\ &= -a\\ \theta &= \pi - \tan^{-1} a \end{align*}$$ Let $r(\cos\theta + i\sin\theta) = [p(\cos\phi + i\sin\phi)]^2 = p^2(\cos2\phi + i\sin2\phi)$ for $p\ge 0$. Then $p^2 = r$ and $p = \sqrt r$. Also, there are two arguments $\phi\in[0,2\pi)$ that will satisfy $2\phi\equiv \theta \pmod {2\pi}$: $$\phi_1 = \frac\theta 2,\quad \phi_2 = \frac{\theta + 2\pi}2$$ So one of the roots is $$\begin{align*} p(\cos\phi_1 + i\sin\phi_1) &= \sqrt[4]{a^2+1}\left[\cos\frac{\pi-\tan^{-1}a}2 + i\sin\frac{\pi-\tan^{-1}a}2\right]\\ &= \sqrt[4]{a^2+1}\left[\sin\frac{\tan^{-1}a}2 + i\cos\frac{\tan^{-1}a}2\right]\\ \end{align*}$$ and the other root is $$\begin{align*} p(\cos\phi_2 + i\sin\phi_2) &= \sqrt[4]{a^2+1}\left[\cos\frac{3\pi-\tan^{-1}a}2 + i\sin\frac{3\pi-\tan^{-1}a}2\right]\\ &= \sqrt[4]{a^2+1}\left[-\sin\frac{\tan^{-1}a}2 - i\cos\frac{\tan^{-1}a}2\right]\\ \end{align*}$$ Simplify the nested inverse tangent in (co)sine using half-angle formulae. $$\begin{align*} \sin\frac{\tan^{-1} a}2 &= \operatorname{sgn} a \sqrt{\frac{1-\cos\tan^{-1}a}2}\\ &= \operatorname{sgn} a\sqrt{\frac{1-\frac1{\sqrt{a^2+1}}}2}\\ &= \operatorname{sgn} a\sqrt{\frac{\sqrt{a^2+1} - 1}{2\sqrt{a^2+1}}}\\ \cos\frac{\tan^{-1} a}2 &= \sqrt{\frac{1+\cos\tan^{-1}a}2}\\ &= \sqrt{\frac{1+\frac1{\sqrt{a^2+1}}}2}\\ &= \sqrt{\frac{\sqrt{a^2+1} + 1}{2\sqrt{a^2+1}}} \end{align*}$$ For this question, $\operatorname{sgn} a$ is always $1$.
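A quick numerical comparison against the principal square root; the value $a=2.7$ is an arbitrary choice for the check:

```python
import cmath, math

a = 2.7
root = cmath.sqrt(-1 + a*1j)        # principal square root of ai - 1

r4 = (a*a + 1) ** 0.25              # fourth root of a^2 + 1
half = math.atan(a) / 2
formula = complex(r4 * math.sin(half), r4 * math.cos(half))

print(root)      # the two printed values agree
print(formula)
```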
Weird Identities with Scalar Product & Transpose: $\vec{a}\cdot\vec{b} = \vec{b}^T \cdot {a}^T$, $\vec{a}^T \cdot \vec{b} = \vec{b}^T \cdot \vec{a} $?
A vector should always be a column vector. If you want to talk about a "row vector", you should write it as the transpose of some vector, or as a matrix with just one row. I will add that, technically, the scalar product can not be written as a matrix product the way you are doing it. If $u,v$ are vectors, then the matrix product $u^T v$ is a $1\times 1$-matrix, whereas the scalar product $u\cdot v$ is a scalar. It is common to ignore this and consider them the same, but one should sometimes be careful when using this, for example when one wants to multiply the scalar product with a matrix. Added: You cannot say that $a$ is a vector and then write $a=(4,5,6)$ as a row matrix. You can however say that $a$ is a $1\times 3$-matrix $(4,5,6)$, or that $a$ is a vector and $a^T=(4,5,6)$ (and in the latter case, $a$ is a column vector). Let us suppose that you defined the matrices $a=(4,5,6)$ and $b=\begin{pmatrix}1\\2\\3\end{pmatrix}$. Now, the matrix product $a\cdot b$ is equal to the $1\times 1$-matrix $[32]$, not the number 32, which is the scalar product of the vectors $\begin{pmatrix}4\\5\\6\end{pmatrix}$ and $\begin{pmatrix}1\\2\\3\end{pmatrix}$. Likewise, the matrix product $b^T\cdot a^T$ is equal to the $1\times 1$-matrix $[32]$. Then we can conclude that $a\cdot b=b^T\cdot a^T$ (note that you made a typo in your post and wrote $a^T\cdot b^T$ for the right-hand-side). This is consistent with Wikipedia, which states that $(a\cdot b)^T=b^T\cdot a^T$. However, your typo is also correct in this particular case, namely $(a\cdot b)^T=a^T\cdot b^T$, but this is only because both matrices are $1\times 1$-matrices, so they are equal to their own transposes. Added after the edit: You get the identity $a^T\cdot b=b^T\cdot a$. This equation is not true for general matrices $a,b$, so you will not find it on Wikipedia. What is generally true, however, is the identity $(a^T\cdot b)^T=b^T\cdot a$. The reason your identity is true is because both matrices are $1\times 1$-matrices, and the transpose of a $1\times 1$-matrix does not change the matrix.
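A small numpy illustration of these points, using the arrays from the example above:

```python
import numpy as np

# a as a 1x3 row matrix and b as a 3x1 column matrix, as in the answer
a = np.array([[4, 5, 6]])
b = np.array([[1], [2], [3]])

print(a @ b)       # [[32]] -- a 1x1 matrix, not the scalar 32
print(b.T @ a.T)   # [[32]] -- so a*b = b^T * a^T here

# With column vectors u, v the 1x1 identity u^T v = v^T u also holds
u, v = a.T, b
print(u.T @ v, v.T @ u)   # [[32]] [[32]]
```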
15 coins are tossed; the coins which show up heads are then tossed again. What is the probability of observing 5 heads in the second round of tosses?
You set up your calculation well, but it is a little lengthy. There is a quicker way. The coins that did not land heads in the first round will feel bad about being left out of the second round. So let's change the game a little, and toss the first coin twice, and the second, and the third, and so on. The probability a coin lands Head then Head is $(0.4)^2$, so the probability this happens $5$ times is $\binom{15}{5}((0.4)^2)^5(1-(0.4)^2)^{10}$. Let random variable $X$ be the number of heads in the second round. You can find the distribution of $X$, that is, $\Pr(X=k)$, in the same way.
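A minimal simulation sketch to confirm the shortcut; the heads probability $0.4$ is the one used above, and the trial count is arbitrary:

```python
from math import comb
import random

p = 0.4            # probability of heads
n, k = 15, 5

exact = comb(n, k) * (p**2)**k * (1 - p**2)**(n - k)

trials = 200_000
hits = 0
for _ in range(trials):
    first = sum(random.random() < p for _ in range(n))        # heads in round 1
    second = sum(random.random() < p for _ in range(first))   # re-toss those coins
    hits += second == k

print(exact)           # ~0.0551
print(hits / trials)   # Monte Carlo estimate, close to the exact value
```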
Probability definition question
It is not defined that way unless there is no bias in the procedure which realises the outcomes. That is: we only measure probability by comparing counts of outcomes when we know† that all individual outcomes are equally likely. They also have to be mutually exclusive and jointly exhaustive. So, with the coin toss, it is not just that we have two possible outcomes, it is that there are only two unbiased outcomes possible‡. †: or at least have a justified reason to believe. ‡: according to the model of how an ideal coin toss works, anyway.
The definition of Riemann to complex function
The definition you've quoted is a historical one from Riemann and uses the term "function" (and "$dz$") in a nonstandard way. The next sentence in your book gives the equivalent modern definition of differentiability of a complex function.
Heat semigroup representation
If you have a bounded region with a smooth boundary, and Dirichlet conditions, then $\Delta$ can be represented using an eigenfunction expansion, and the heat semigroup can be expressed in terms of the eigenfunctions of the Laplacian as well. That is, $$ -\Delta \varphi_n =\lambda_n \varphi_n, \;\;\; \|\varphi_n\|_{L^2(\Omega)}=1,\\ \lambda_1 \le \lambda_2 \le \lambda_3 \le \cdots, $$ and the eigenfunctions can be assumed to be mutually orthogonal. The heat semigroup $H(t)f=e^{t\Delta}f$, which gives the solution of $\psi_{t}=\Delta \psi$ with $\psi(0)=f$, can then be expressed in terms of the eigenfunctions of $-\Delta$: $$ H(t)f = \sum_{n=1}^{\infty}e^{-\lambda_n t}\langle f,\varphi_n\rangle\varphi_n. $$ There would not generally be any simplification beyond this form. This is a $C^0$ semigroup on $L^2(\Omega)$.
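As a concrete illustration, here is a minimal one-dimensional sketch; the interval $(0,\pi)$, the eigenpairs $\lambda_n=n^2$, $\varphi_n=\sqrt{2/\pi}\sin(nx)$, the initial datum and the truncation at $60$ modes are all choices made just for the example:

```python
import numpy as np

# 1D illustration on (0, pi) with Dirichlet conditions
x = np.linspace(0, np.pi, 400)
dx = x[1] - x[0]
f = x * (np.pi - x)          # a sample initial datum vanishing at the boundary
t = 0.1

u = np.zeros_like(x)
for n in range(1, 60):
    phi = np.sqrt(2/np.pi) * np.sin(n * x)
    coeff = (f * phi).sum() * dx                 # <f, phi_n> in L^2(0, pi)
    u += np.exp(-n**2 * t) * coeff * phi         # e^{t Delta} f, truncated at 60 modes

print(f.max(), u.max())   # the solution has decayed from f.max() = pi^2/4
```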
Solve for $x$, $2\cdot 3^{x+1}-3^{-x} \le 5$
It's a standard trick when you see $a^x$ and $a^{-x}$ to multiply through by $a^x$ and see if a quadratic equation appears. In this case, pull one factor of $3$ out of the first term, bring the $5$ over to the left side and multiply by $3^x$ to get $$6\cdot 3^{2x}-5\cdot 3^x -1 \leq 0.$$ This factors: $$(6\cdot 3^x+1)(3^x-1) \leq 0.$$ The first factor is always positive, so we must have $3^x-1\leq 0.$ So $3^x\leq 1$, and hence $x\leq 0.$
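A quick numerical check of the boundary case; this does nothing beyond evaluating the left-hand side:

```python
f = lambda x: 2 * 3**(x + 1) - 3**(-x)

print(f(0))     # 5.0 -- equality at x = 0
print(f(-1))    # -1.0 -- well below 5
print(f(0.1))   # > 5, so the inequality fails for x > 0
```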
Space of bounded continuous functions is complete
Let $(B(X), \|\cdot\|_\infty)$ be the space of bounded real-valued functions with the sup norm. This space is complete. Proof: We claim that if $f_n$ is a Cauchy sequence in $\|\cdot\|_\infty$, then its pointwise limit is its limit in $B(X)$, i.e. it is a bounded real-valued function: Since for fixed $x$, $f_n(x)$ is a Cauchy sequence in $\mathbb R$, and since $\mathbb R$ is complete, its limit is in $\mathbb R$, and hence the pointwise limit $f(x) = \lim_{n \to \infty } f_n(x)$ is a real-valued function. It is also bounded: Let $N$ be such that for $n,m \geq N$ we have $\|f_n - f_m\|_\infty < \frac{1}{2}$. Then for all $x$ $$ |f(x)| \leq |f(x) - f_N(x)| + |f_N(x)| \leq \|f - f_N \|_{\infty} + \|f_N \|_{\infty}$$ where $\|f - f_N \|_{\infty} \leq \frac12$ since for $n \geq N$, $ |f_n(x) - f_N(x)| < \frac12$ for all $x$ and hence $|f(x) - f_N(x)| = |\lim_{n \to \infty} f_n(x) - f_N(x)| = \lim_{n \to \infty} |f_n(x) - f_N(x)| \color{red}{\leq} \frac12$ (not $<$!) for all $x$, and hence $\sup_x |f(x) - f_N(x)| = \|f-f_N\|_\infty \leq \frac12$. To finish the proof we need to show that $f_n$ converges in norm, i.e. $\|f_N - f\|_\infty \xrightarrow{N \to \infty} 0$: Let $\varepsilon > 0$. Let $N$ be such that for $n,m \geq N$ we have $\|f_n-f_m\|_\infty < \varepsilon$. Then for all $n \geq N$ $$ |f(x) - f_n(x)| = \lim_{m \to \infty} |f_m(x) - f_n(x)| \leq \varepsilon $$ for all $x$ and hence $\|f- f_n\|_\infty \leq \varepsilon$.
Matrix inequalities - Positive matrices
An easy counterexample is $A = -I$, $B = -2I$. EDIT: What is true is this. Suppose $A > 0$, $B$ is hermitian and $AB > I$. Then $A^{1/2} B A^{1/2} > I$, where $A^{1/2}$ is the unique positive definite square root of $A$, and so $B > A^{-1}$. In fact, since $AB$ is typically not hermitian, the hypothesis can be weakened to: $A > 0$, $B$ is hermitian, and all eigenvalues of $AB$ are greater than $1$.
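A short numerical confirmation of the counterexample; the $2\times 2$ size is an arbitrary choice:

```python
import numpy as np

# The counterexample A = -I, B = -2I from above
A = -np.eye(2)
B = -2 * np.eye(2)

print(np.linalg.eigvalsh(A @ B - np.eye(2)))      # [1, 1]: AB > I holds
print(np.linalg.eigvalsh(B - np.linalg.inv(A)))   # [-1, -1]: B > A^{-1} fails
```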
Show differential equation solution is arc of great circle
Let $u=\cot \theta$, then $1+u^2=\csc^2 \theta$ and $du=-\csc^2 \theta \, d\theta$. \begin{align} \frac{d\phi}{d\theta} &= \frac{C}{\sin \theta \sqrt{\sin^2 \theta-C^2}} \\ &= \frac{C\csc^2 \theta}{\sqrt{1-C^2\csc^2 \theta}} \\ d\phi &= -\frac{C\,du}{\sqrt{1-C^2(1+u^2)}} \\ &= -\frac{C\,du}{\sqrt{(1-C^2)-C^2u^2}} \\ &= -\frac{du}{\sqrt{\tan^2 \alpha-u^2}} \tag{$C=\cos \alpha$} \\ \phi &=\cos^{-1} \left( \frac{u}{\tan \alpha} \right)+\beta \\ \cos (\phi-\beta) &= \frac{\cot \theta}{\tan \alpha} \\ \cot \theta &= \tan \alpha \cos (\phi-\beta) \\ \end{align} Rearrange, $$(\sin \theta \cos \phi)(\sin \alpha \cos \beta)+ (\sin \theta \sin \phi)(\sin \alpha \sin \beta)= (\cos \theta)(\cos \alpha)$$ which lies on the plane $$x\sin \alpha \cos \beta+y\sin \alpha \sin \beta-z\cos \alpha=0$$
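A numerical spot check that the resulting curve is planar; the values $\alpha=0.7$, $\beta=1.3$ and the range of $\theta$ are arbitrary choices for the test:

```python
import numpy as np

alpha, beta = 0.7, 1.3
# A range of theta where |cot(theta)| <= tan(alpha), so arccos is defined
theta = np.linspace(np.pi/2 - 0.5, np.pi/2 + 0.5, 7)
phi = beta + np.arccos(np.cos(theta) / np.sin(theta) / np.tan(alpha))

# Points on the unit sphere along the solution curve
x = np.sin(theta) * np.cos(phi)
y = np.sin(theta) * np.sin(phi)
z = np.cos(theta)

plane = x*np.sin(alpha)*np.cos(beta) + y*np.sin(alpha)*np.sin(beta) - z*np.cos(alpha)
print(np.abs(plane).max())   # ~1e-16: the points lie in a plane through the origin
```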
Sum and difference coprimes
Suppose $\gcd(a,b)=1$. Let $g=\gcd(a+b,a-b)$. From $g\mid(a+b)+(a-b)=2a$ and $g\mid(a+b)-(a-b)=2b$, it follows that $g\mid\gcd(2a,2b)$. But as $\gcd(a,b)=1$, we have $\gcd(2a,2b)=2$. So $g\mid2$. That means it suffices to choose $a$ and $b$ such that $a+b$ and $a-b$ are both odd. That is, exactly one of $a$ and $b$ is even (they cannot both be even, being coprime). So $a$ and $b$ will satisfy $g=1$ if and only if $a$ and $b$ are not both odd.
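A brute-force check of the claim for small values; the bound $60$ is arbitrary:

```python
from math import gcd

# For coprime (a, b): gcd(a+b, a-b) is 2 when both are odd, and 1 otherwise
for a in range(2, 60):
    for b in range(1, a):
        if gcd(a, b) == 1:
            g = gcd(a + b, a - b)
            expected = 2 if (a % 2 == 1 and b % 2 == 1) else 1
            assert g == expected
print("claim verified for all coprime pairs up to 60")
```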
What is the difference between Taylor series and Laurent series?
Laurent series allows for terms with negative power. Intuitively, this allows for singularities to occur.
Is there a sense in which $\sin^2(nx)$ converges to $1/2$?
It does hold, for example, in the sense of periodic distributions. If $f \in L^1([0,2\pi])$ then the Riemann–Lebesgue lemma states that $$ \lim_{n \to \infty} \int_0^{2\pi} \sin nx\, f(x)\, dx = 0\,.$$ Thus $\sin nx \to 0$ in this sense, and the same holds for $\cos nx$. For $\sin^2 nx$ use simply that $$ \sin^2 nx = \frac {1 - \cos 2nx}2\,,$$ to obtain $\sin^2 nx \to \frac 12$. The important point, as mentioned in the comments, is that convergence comes in many different forms and shapes. As far as I understand Hairer's work on regularity structures (very little), it is about introducing new notions of convergence to make sense of objects that previously seemed ill-defined or were undefined. Hence, if you want to understand his work, I would study what happened in the paper before that. Somewhere there should be an explanation or definition of how convergence is to be understood.
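A quick numerical illustration of this weak convergence; the test function $f$ and the grid are arbitrary choices:

```python
import numpy as np

f = lambda x: np.exp(np.sin(x)) + x**2     # an arbitrary test function on [0, 2*pi]
x = np.linspace(0, 2*np.pi, 200_001)
dx = x[1] - x[0]

target = 0.5 * np.sum(f(x)) * dx           # (1/2) * integral of f
for n in (1, 5, 50, 500):
    val = np.sum(np.sin(n*x)**2 * f(x)) * dx
    print(n, round(val, 4), round(target, 4))
# the weighted integrals of sin^2(nx) approach (1/2) * integral of f as n grows
```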
Spectrum of the ring $k[T]/(T^2)$
Why not make your life easier? There is no need to calculate with polynomials. We have homeomorphisms $\mathrm{Spec}(k[T]/(T^2)) \cong V(T^2) = V(T) \cong \mathrm{Spec}(k[T]/(T)) \cong \mathrm{Spec}(k) = \{\eta\}$. This maps $\eta$ to the kernel of $k[T]/(T^2) \to k[T]/(T) \cong k$, which is $(T)$. In general, if $A$ is a commutative ring with a maximal ideal $\mathfrak{m}$ and $n \geq 1$, then $A/\mathfrak{m}^n$ has a single prime ideal, namely $\mathfrak{m}/\mathfrak{m}^n$. Direct proof: The prime ideals of $A/\mathfrak{m}^n$ correspond to the prime ideals of $A$ containing $\mathfrak{m}^n$. But these have to contain $\mathfrak{m}$, i.e. have to be $=\mathfrak{m}$.
What is uniform convergence for one function and how is that equivalent to continuity?
Theorem: the uniform limit of a sequence of continuous functions on an interval is a continuous function on that interval. If you examine the teacher's proof that $\sum_{n=1}^\infty \frac{1}{n^2 - x^2}$ converges (presumably on some interval that doesn't contain any positive integer $n$), it may have in it some estimate that is true for all $x$ in the interval. That will allow you to conclude that the convergence is uniform.
Given $A=\langle yz^2,-3xz^2, 2xyz\rangle$ and $\phi = xyz$, evaluate $A .\nabla $ and $(A \times \nabla)\phi$
Things become clearer when we focus on the nature of the objects. Define $$ f_1: (x,y,z)\to yz^2 $$ $$ f_2: (x,y,z)\to -3xz^2 $$ $$ f_3: (x,y,z)\to 2xyz $$ $$ \phi : (x,y,z) \to xyz$$ So $A$ is a function $$A\in{(\mathbb{R}^3)}^{\mathbb{R}^3}, \qquad A=(f_1,f_2,f_3).$$ Thus the operators $$ A\cdot \nabla, \qquad A\times \nabla $$ are well defined; each is a function of functions, i.e. generally speaking an operator. Applying them to $\phi$: $$ A\cdot \nabla(\phi)=f_1 \partial_1\phi + f_2 \partial_2\phi + f_3 \partial_3\phi $$ $$ (A \times \nabla)(\phi)=\langle(f_2 \partial_3 - f_3 \partial_2)\phi \ ; \ (f_3 \partial_1 - f_1 \partial_3)\phi \ ; \ (f_1 \partial_2 - f_2 \partial_1)\phi \rangle $$ There you go! Comment: as you can see, $A \times \nabla$, for instance, is an operator; it is an object, a process, ready to be applied to a function.
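A short symbolic check of both expressions for the given $A$ and $\phi$; it uses that $\nabla$ acts only on $\phi$, so $(A\times\nabla)\phi = A\times\nabla\phi$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([y*z**2, -3*x*z**2, 2*x*y*z])
phi = x*y*z
grad_phi = sp.Matrix([sp.diff(phi, v) for v in (x, y, z)])

# (A . nabla) phi  =  A . grad(phi)
print(sp.simplify(A.dot(grad_phi)))

# (A x nabla) phi : each slot of nabla acts on phi only
print(sp.simplify(A.cross(grad_phi)))
```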
Expected determinant of a random symmetric matrix
Assuming the $3$ entries to be independent, \begin{align}\mathbb{E}[ad-b^2]&=\mathbb{E}[a]\mathbb{E}[d]-\mathbb{E}[b^2] \\ &=-(\mathbb E[b^2]-\mathbb{E}[b]^2)\\ &=-\operatorname{Var}(b)\end{align}
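A Monte Carlo sanity check; it assumes, purely for the test, that the entries are i.i.d. uniform on $[0,1]$ (so $\mathbb E[a]\mathbb E[d]=\mathbb E[b]^2$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical choice: independent entries uniform on [0, 1]
a, b, d = rng.random(n), rng.random(n), rng.random(n)
dets = a * d - b**2

print(dets.mean())   # ~ E[a]E[d] - E[b^2] = 1/4 - 1/3 = -1/12
print(-b.var())      # ~ -Var(b) = -1/12
```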
An identity for $\binom{-1/2}{n}$
More than a hint: Recall that the falling factorial is defined as $x^\underline{k}=\frac{x!}{(x-k)!}=x(x-1)\cdots (x-k+1),$ and the rising factorial is defined as $x^\overline{k}=\frac{(x+k-1)!}{(x-1)!}=x(x+1)\cdots (x+k-1).$This is the binomial theorem $$(x+y)^n=\sum _{k=0}^n\binom{n}{k}x^ky^{n-k}.$$ The beautiful part is that this keeps happening for $$(x+y)^{\underline{n}}=\sum _{k=0}^n\binom{n}{k}x^{\underline{k}}y^{\underline{n-k}},$$ notice that if you take $x=n,y=-1/2-n,$ then $$(-1/2)^{\underline{n}}=\sum _{p=0}^n\binom{n}{p}n^{\underline{n-p}}(-1/2-n)^{\underline{p}}.$$ Then you have to check that $(-x)^{\underline{k}}=(-1)^k(x)^{\overline{k}}.$
proof that $Y=\mu+\sigma X$ if $X\sim N(0,1)$,
Hint $$\mathbb P\{Y\leq y\}=\mathbb P\left\{X\leq \frac{y-\mu}{\sigma }\right\}.$$
How to evaluate this binomial sum?
\begin{align*} \sum_{i=1}^{n-k+1} i \binom{n-i}{k-1} &= \sum_{i=0}^{n-k} (i+1) \binom{n-i-1}{k-1} \\ &= \sum_{i=0}^{n-k} \big((n+1) - (n-i)\big) \binom{n-i-1}{k-1} \\ &= \sum_{i=0}^{n-k} (n+1) \binom{n-i-1}{k-1} - \sum_{i=0}^{n-k} (n-i) \binom{n-i-1}{k-1} \\ &= (n+1) \sum_{i=0}^{n-k}\binom{n-i-1}{k-1} - k \sum_{i=0}^{n-k} \binom{n-i}{k} \\ &= (n+1) \sum_{m=k-1}^{n-1}\binom{m}{k-1} - k \sum_{m=k}^{n} \binom{m}{k} \\ &= (n+1) \binom{n}{k} - k \binom{n+1}{k+1} \\ &= \binom{n+1}{k+1} \end{align*} It uses $$ r \binom{r-1}{k-1} = k \binom{r}{k} $$ and $$ \sum_{0 \le k \le n} \binom{k}{m} = \binom{n+1}{m+1}.$$
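A brute-force check of the identity over small parameters:

```python
from math import comb

# Check  sum_{i=1}^{n-k+1} i * C(n-i, k-1) == C(n+1, k+1)  for small n, k
for n in range(1, 20):
    for k in range(1, n + 1):
        s = sum(i * comb(n - i, k - 1) for i in range(1, n - k + 2))
        assert s == comb(n + 1, k + 1)
print("identity verified for 1 <= k <= n < 20")
```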
Two persons should not sit next to each other in a row of $n$.
Break into cases.

First person of our pair of troublemakers sits at a corner:

- pick which corner ($2$ options)
- pick where the other troublemaker sits ($n-2$ options)
- pick how the other people sit ($(n-2)!$ options)

First person of our pair of troublemakers doesn't sit at a corner:

- pick which center seat ($n-2$ options)
- pick where the other troublemaker sits ($n-3$ options)
- pick how the other people sit ($(n-2)!$ options)

This gives a total of $$\color{red}{2}(n-2)(n-2)! + \color{red}{(n-2)}(n-3)(n-2)!$$ You mistakenly used $n$ for the above red numbers instead of $2$ and $n-2$ respectively. This of course is equal to $n!-2(n-1)!$, as expected, as can be shown through algebraic manipulation; a brute-force check is sketched below.
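Here is that sketch; it simply enumerates permutations for small $n$ and compares with $n!-2(n-1)!$:

```python
from itertools import permutations
from math import factorial

# Arrangements of n people in a row with persons 0 and 1 never adjacent
for n in range(3, 8):
    count = sum(
        abs(p.index(0) - p.index(1)) != 1
        for p in permutations(range(n))
    )
    print(n, count, factorial(n) - 2 * factorial(n - 1))   # the two counts agree
```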
Residue theorem application [demonstration]
I think that $F$ is meant to be entire ($F$ analytic over $\mathbb{C}$); otherwise it is not true. If $F$ is entire: $f$ has at most one singularity, at $1$. The exercise says that $1$ is a pole, so $f$ is a meromorphic function having a single pole at $1$. Let $ \frac{a_{-n}}{(z-1)^n}+\cdots+\frac{a_{-1}}{z-1}+a_0+a_1(z-1)+ \cdots $ be the Laurent series of $f$ at $1$, and let $P(z)=a_{-1}z+\cdots+a_{-n}z^n$. Then $g(z)=f(z)-P\left(\frac{1}{z-1}\right)$ is holomorphic over $\mathbb{C}$. On the other hand, one checks that $g$ is bounded. Therefore, by Liouville's theorem, $g(z)=a$ where $a$ is a constant. Finally we get $f(z)=P\left(\frac{1}{z-1}\right)+a$ (with $a=a_0$).
Find $f(6)$ where $f(4)=\frac{f(8)}{2}=\frac{1}{4}$, $\int_4^8 \frac{(f'(x))^2}{f(x)^4}dx=1$
Hints: verify that equality holds in the Cauchy–Schwarz inequality $\int_4^{8} \frac {f'(x)} {f(x)^{2}}dx \leq \sqrt {\int_4^{8} \frac {f'(x)^{2}} {f(x)^{4}}dx} \sqrt {8-4}$. [The integral on the left is $-\frac 1 {f(x)}\Big|_4^{8}$.] This implies that ${f'(x)}/ {f(x)^{2}}$ is a constant $c$. Now solve this simple DE to find $f$.
Complete Metric Space Proof
Someone please tell me if this is correct: I proceed as in the standard proof that $L^2$ is complete. Choose a subsequence $\{n_k\}_{k \in \mathbb{N}}$ such that $Y^k \equiv X^{n_k}$ satisfies $d(Y^k, Y^{k+1}) < \frac{1}{2^k}$. Further define $H_n = \sum_{k=1}^{n-1} \sup_{t \leq T} |Y^{k+1}_t - Y^k_t|$. We see that $H_\infty \equiv \sup_n H_n < \infty$ almost surely since, by the monotone convergence theorem and Minkowski's inequality: $$\|H_\infty\|_{L^2} = \sup_n \|H_n\|_{L^2} \leq \sup_n \sum_{k=1}^{n-1}d(Y^k, Y^{k+1}) \leq \sum_{k=1}^{\infty} \frac{1}{2^k} = 1$$ Thus for $m,n$ large and $\epsilon > 0$, $$\textbf{(1)} \quad \quad \sup_{t \leq T} |Y^{m}_t - Y^n_t| \leq H_m-H_{n-1} < \epsilon $$ Thus, $Y^n$ is a Cauchy sequence of continuous functions on $[0,T]$, and this is a complete metric space w.r.t. the supremum metric, so there exists a continuous limit $Y \equiv (Y_t)_{t \leq T}$. Now $\sup_{t \leq T} |Y^{n}_t - Y_t| \rightarrow 0$, and taking $m \rightarrow \infty$ in the first inequality in (1) gives $\sup_{t \leq T} |Y^{n}_t - Y_t| \leq H_\infty \in L^2$, so that we may use the dominated convergence theorem to say that $Y^n \rightarrow Y$ in the metric space $(\mathcal{R}_c^2,d)$. The uniqueness of limits for Cauchy sequences concludes the result. $\quad \quad \square$
inequality in a differential equation
The basic argument would go like this. Go ahead and let $f(t) = |u(t)|^2$, so that equation (1) says $f'(t) + f(t) \leq 1$. We can rewrite this as $$\frac{f'(t)}{1-f(t)}\leq 1.$$ Let $g(t) = \log(1 - f(t))$. Then this inequality is exactly that $$-g'(t)\leq 1.$$ It follows that $$g(t) = g(0) + \int_0^t g'(s)\,ds\geq g(0) - t.$$ Plugging in $\log(1 - f(t))$ for $g$, we see that $$\log(1-f(t)) \geq \log(1 - f(0)) - t.$$ Exponentiating both sides gives $$1 - f(t) \geq e^{-t}(1 - f(0)),$$ which is exactly the inequality you are looking for. Edit: In the case when $f(t)>1$, the argument above doesn't apply because $g(t)$ is not defined. Instead, take $g(t) = \log(f(t) -1)$, so that $g'(t)\leq -1$. It follows that $g(t) \leq g(0) - t$ (at least for small enough $t$ that $g$ remains defined), which upon exponentiating again gives the desired inequality.
what is the complex number that satisfy both loci $\lvert z\rvert=4$ and $\lvert z+2\rvert=\lvert z-4\rvert$?
The equation $|z+2|=|z-4|$ is the locus of the perpendicular bisector of the line segment joining $(-2,0)$ and $(4,0)$. So it is a vertical straight line whose equation is $x=\frac{-2+4}{2}=1$. The equation $|z|=4$ represents a circle of radius $4$ centered at $(0,0)$. Put $z=x+iy=1+iy$ into the circle's equation to get the points of intersection.
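Solving $1+y^2=16$ gives $y=\pm\sqrt{15}$; a quick numerical check that these points satisfy both loci:

```python
y = 15 ** 0.5
for z in (1 + 1j*y, 1 - 1j*y):
    print(abs(z), abs(z + 2), abs(z - 4))   # |z| = 4 and |z+2| = |z-4| (= sqrt(24))
```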