Expected difference from mean: is it always zero?
If $\mu = \mathbb{E}[X]$, then $$ \mathbb{E}[X - \mu] = \mathbb{E}[X] - \mathbb{E}[\mu] = \mathbb{E}[X] - \mu = \mu - \mu = 0 $$ regardless of the skewness of $X$
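A quick numerical illustration of the identity (a Python sketch with NumPy, not part of the original answer): draw samples from a strongly skewed distribution and check that the mean of $X-\mu$ is zero, where $\mu$ is the sample mean.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.exponential(scale=2.0, size=1_000_000)  # heavily right-skewed sample
    mu = x.mean()
    print(mu)                # close to 2, the mean of the distribution
    print((x - mu).mean())   # essentially 0, up to floating-point rounding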
Calculate basis of $U^\perp$ with $U=\langle \left(\begin{array}{c} 1\\ 1\\ 1\\ \end{array}\right) \rangle$
Here's a result you can rely on to verify whether or not your candidate basis is valid. Proposition. Suppose that $\{v_1,\dotsc, v_d\}$ is a basis of a vector subspace $V\subset\Bbb R^n$ and let $\{w_1,\dotsc, w_d\}\subset\Bbb R^n$. Let $A$ and $B$ be the matrices whose columns are $\{v_1,\dotsc,v_d\}$ and $\{w_1,\dotsc,w_d\}$ respectively, so \begin{align*} A &= \begin{bmatrix}v_1 & \dotsb & v_d\end{bmatrix} & B &= \begin{bmatrix}w_1 & \dotsb & w_d\end{bmatrix} \end{align*} Then $\{w_1,\dotsc,w_d\}$ is a basis of $V$ if and only if there is an invertible matrix $P$ satisfying $AP=B$. In your situation, the matrices $A$ and $B$ are given by \begin{align*} A &= \left[\begin{array}{rr} -1 & -1 \\ 1 & 0 \\ 0 & 1 \end{array}\right] & B &= \left[\begin{array}{rr} 1 & 0 \\ -1 & -1 \\ 0 & 1 \end{array}\right] \end{align*} Is there an invertible $P$ satisfying $AP=B$? Hint. Using the notation from the proposition, we have $w_1=-v_1$ and $w_2=-v_1+v_2$. How might this help us construct $P$?
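If you prefer to check this numerically rather than by hand, here is a small sketch (Python with NumPy, not part of the original answer): solve $AP=B$ in the least-squares sense, then verify that $AP$ really equals $B$ and that $P$ is invertible.

    import numpy as np

    A = np.array([[-1, -1], [1, 0], [0, 1]], dtype=float)
    B = np.array([[1, 0], [-1, -1], [0, 1]], dtype=float)

    # Solve AP = B column by column (least squares, since A is 3x2)
    P, residual, rank, _ = np.linalg.lstsq(A, B, rcond=None)
    print(P)                         # the candidate change-of-basis matrix
    print(np.allclose(A @ P, B))     # True: B's columns lie in the column space of A
    print(np.linalg.det(P))          # nonzero, so P is invertible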
Do dual Banach spaces admitting $c_0$ as a quotient contain complemented copies of $\ell_1$?
Since every Banach space with a subspace isomorphic to $\ell_1$ has nonseparable dual, to construct a counterexample to the question it suffices to find a Banach space $X$ such that $X^{\ast\ast}$ is norm separable and $X^\ast$ has a quotient isomorphic to $c_0$. To this end we use the James-Lindenstrauss construction, which yields (amongst other things) the following result: Let $Y$ be a separable Banach space. Then there exists a separable Banach space $Z$ such that $Z^{\ast\ast}/Z$ is isomorphic to $Y$. This result was proved by Joram Lindenstrauss in his paper On James's paper "Separable conjugate spaces", Israel J. Math 9(3) (1971), pp.279-284. (Robert James earlier obtained this result for the case where $Y$ is finite dimensional.) To give the claimed counterexample, we also mention the notion of a three-space property for Banach spaces. In particular, a property of a Banach space is a three-space property if whenever $E$ is a Banach space, $F \subseteq E$ is a closed linear subspace and two of the spaces $E$, $F$ and $E/F$ have the property, then all three of the spaces $E$, $F$ and $E/F$ necessarily have the property; a classical example of a three-space property of Banach spaces is reflexivity. We shall call upon the fact that the following properties are both three-space properties: Norm separability. Norm separability of the dual. Let $Z$ be a separable Banach space such that $Z^{\ast\ast}/Z$ is isomorphic to $c_0$. We claim that $Z^{\ast\ast\ast}$ is norm separable; once this is established, taking $X=Z^{\ast}$ gives the desired counterexample. To this end, let us first notice that since norm separability is a three-space property, $Z^{\ast\ast}$ is norm separable (i.e., take $E=Z^{\ast\ast}$ and $F=Z$ above). Moreover, this implies that $Z^\ast$ is norm separable. In particular, $Z$ and $Z^{\ast\ast}/Z$ both have separable dual, hence taking again $E=Z^{\ast\ast}$ and $F=Z$ above and applying the fact that norm separability of the dual is a three-space property, we conclude that $Z^{\ast\ast\ast}$ is norm separable, as claimed.
Orientation and sign of rotation matrix for projective transformation
You’re not doing anything wrong per se. Your rotation maps $Q'$ to a point on the negative $x$-axis; the book solution instead maps it to the positive $x$-axis. Both lead to the same matrix $M$ in the end, as you can verify for yourself by multiplying out $R'TR$ for both the book’s and your chosen rotations.
Finding the probability of a minimised dice roll with a reroll.
I wonder why you've excluded $X=11$ and $X=12$. If the lower two dice have the same value $k$, and thus a sum of $2k$, there is $1$ result in which the highest die is also $k$ and $3(6-k)$ results in which the highest die is higher, for a total of $19-3k$. If the lower two dice have different values $k\lt l$, and thus a sum of $k+l$, there are $3$ results in which the highest die is also $l$ and $6(6-l)$ results in which the highest die is higher, for a total of $39-6l$. Thus, for odd $n\le 7$, the number of ways to get a sum of $n$ from the lower two dice is \begin{eqnarray} \sum_{k=1}^{(n-1)/2}(39-6(n-k)) &=& \frac{(39-6n)(n-1)}2+6\cdot\frac12\cdot\frac{n-1}2\cdot\frac{n+1}2 \\ &=& \frac{2(-6n^2+45n-39)+3(n^2-1)}4\;, \\ &=& \frac94(9-n)(n-1) \end{eqnarray} whereas for $n=9$ it's $39-6\cdot6+39-6\cdot5=12$ and for $n=11$ it's $39-6\cdot6=3$. For even $n\le6$, the number of ways to get a sum of $n$ from the lower two dice is \begin{eqnarray} 19-3\cdot\frac n2+\sum_{k=1}^{n/2-1}(39-6(n-k)) &=& 19-3\cdot\frac n2+(39-6n)\left(\frac n2-1\right)+6\cdot\frac12\cdot\left(\frac n2-1\right)\cdot\frac n2 \\ &=& \frac{4\cdot19-6n+(39-6n)(2n-4)+3(n-2)n}4\;, \\ &=& \frac{-9n^2+90n-80}4\;, \end{eqnarray} whereas for even $n\ge8$ the number of ways to get a sum of $n$ from the lower two dice is \begin{eqnarray} 19-3\cdot\frac n2+\sum_{k=n-6}^{n/2-1}(39-6(n-k)) &=& \frac{-9n^2+90n-80}4-\sum_{k=1}^{n-7}(39-6(n-k)) \\ &=& \frac{-9n^2+90n-80}4-(39-6n)(n-7)-6\cdot\frac12(n-7)(n-6) \\ &=& \frac{-9n^2+90n-80-4(39-6n)(n-7)-12(n-7)(n-6)}4 \\ &=&\frac{3n^2-78n+508}4 \end{eqnarray} To summarize, we have the following numbers $a_n$ of ways of getting a sum of $n$ in the lower two dice: \begin{array}{c|c} n&a_n\\\hline 2&16\\ 3&27\\ 4&34\\ 5&36\\ 6&34\\ 7&27\\ 8&19\\ 9&12\\ 10&7\\ 11&3\\ 12&1 \end{array} The sum is $6^3=216$, as it should be. Now add up the numbers $a_n$ for $n\le X$ and divide by $216$ to obtain the probabilities $p_X$ for obtaining a sum less than or equal to $X$ with the lower two dice: \begin{array}{c|c} X&p_X\\\hline 2&\frac{16}{216}\\ 3&\frac{43}{216}\\ 4&\frac{77}{216}\\ 5&\frac{113}{216}\\ 6&\frac{147}{216}\\ 7&\frac{174}{216}\\ 8&\frac{193}{216}\\ 9&\frac{205}{216}\\ 10&\frac{212}{216}\\ 11&\frac{215}{216}\\ 12&\frac{216}{216} \end{array} As noted by @kccu in a comment, you effectively have two tries to get $X$ or less, so the probability to succeed is $1-(1-p_X)^2=2p_X-p_X^2$.
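These counts are easy to confirm by brute force. A short Python sketch (not part of the original answer) that enumerates all $6^3$ rolls, tallies the sum of the two lowest dice, and then computes the reroll probability $2p_X-p_X^2$ for a sample threshold:

    from itertools import product
    from fractions import Fraction

    counts = {}
    for roll in product(range(1, 7), repeat=3):
        s = sum(sorted(roll)[:2])          # sum of the two lowest dice
        counts[s] = counts.get(s, 0) + 1

    print(counts)   # matches the table of a_n above; values sum to 216

    X = 7           # example threshold
    p = Fraction(sum(c for s, c in counts.items() if s <= X), 216)
    print(p, 2 * p - p ** 2)   # 174/216, and the probability of succeeding in two tries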
Numerical integration in Matlab (Gaussian 3 point quadrature)
I wouldn't modify the code except to suppress output on the last line of gauss3.m. All you need is a function that chops up $[a,b]$ into $N$ subintervals and invokes gauss3.m $N$ times and adds up the results:

    % gaussq.m
    function y = gaussq(func,a,b,N);
    h = (b-a)/N;
    y = 0;
    for i = 1:N,
        y = y+gauss3(func,a+(i-1)*h,a+i*h);
    end

Then you want a script that invokes the above for various values of $N$ and compares results until it's satisfied that the error criterion is met:

    % testg.m
    a = 0; b = 1; y0 = 100;
    for N = logspace(0,6,7),
        y1 = gaussq(@f,a,b,N);
        err = abs((y1-y0)/y0);
        if err < 1.0e-10, break; end
        y0 = y1;
    end
    fprintf('Integral = %17.15f, intervals = %d, error = %e\n',y1,N,err);

Hopefully after you analyze the above example to see how it operates you will be able to write your own, much better, code.
Proofs on equilateral triangles
Let one of the equal angles be $x$. Then the angles will be $x,x,\pi-2x$. So, using the Law of Sines, $$\frac a{\sin x}=\frac b{\sin x}=\frac c{\sin(\pi-2x)}=2R$$ where $R$ is the circumradius, which is constant. So the sides are $2R\sin x, 2R\sin x, 2R\sin(\pi-2x)=2R\sin2x$, and the area will be $$\frac{2R\sin x \cdot 2R\sin x \cdot 2R\sin2x}{4R}=2R^2\sin^2x\sin 2x$$ Now, $$2\sin^2x\sin 2x=\cos x\cdot4\sin^3x=\cos x(3\sin x-\sin3x)=\frac{3\cdot 2\sin x\cos x-2\sin3x\cos x}2=\frac{3\sin2x-(\sin4x+\sin2x)}2=\sin2x-\frac{\sin4x}2$$ Using the Second Derivative Test, prove this is maximal at $x=\frac\pi3$, noting that $0<x<\frac\pi2$ since $\pi-2x>0$.
Find $f(x)$ such that it produces a hyperbolic curve that passes through $(0,t)$ and $(t,0)$
There are infinitely many families of hyperbolic curves that can pass through any given two points. (In general, you need five points to uniquely determine a quadratic curve.) Your proposed family of hyperbolas $\displaystyle y=\frac a{x+b}+c~$ has one vertical and one horizontal asymptote. In general, the two asymptotes can have any slopes. Of course, your hyperbola family is already more than enough. The result $b = -c$ is correct, which means in fact you have a two-parameter family $$ y=\frac{a}{x-c}+c = \frac{a + cx - c^2}{ x - c}$$ where any real-valued $a$ and $c$ render a hyperbola with the desired property (passing through those two points). The quoted "known solution" is just the special case when one takes $c = -t$ and $a = 2t^2$.
Structure of a mapping comes from the Codomain?
It is not $f(x)+g(x)$, but $h=f+g$, with $h:A\rightarrow R$ such that $h(x)=f(x)+g(x)\in R$ for each $x\in A$; the sum on the right makes sense by the properties of the ring $R$.
Can a first-order autonomous differential equation have a single steady state at x=0 that is not approached exponentially fast?
Since you want to have something like $x(t)=t^{-p}$ for $t\to\infty$, instead of exponential dependence, consider that $x(t)=t^{-p}$ satisfies $x'(t) = -pt^{-p-1} = -px^{1+1/p}$. Thus, $x'=-x^\alpha$ with $\alpha>1$ does the job. The choice $\alpha=3$ (hence $p=1/2$) is convenient because then one does not have to worry about how to raise a negative $x$ to the power $\alpha$.
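A quick numerical check (a Python sketch, not part of the original answer): integrate $x'=-x^3$ with a simple Euler scheme and compare with the exact solution $x(t)=x_0/\sqrt{1+2x_0^2t}$, which decays like $t^{-1/2}$ rather than exponentially.

    import math

    x, t, dt = 1.0, 0.0, 1e-4       # initial condition x(0) = 1, small Euler step
    while t < 100.0:
        x += dt * (-x ** 3)         # explicit Euler step for x' = -x^3
        t += dt

    exact = 1.0 / math.sqrt(1.0 + 2.0 * t)
    print(x, exact)                 # both close to (2t)^(-1/2) ~ 0.0705 at t = 100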
A Thought on Recursive Sequences
Monotone bounded sequences of real numbers converge: an increasing sequence converges to its supremum, and a decreasing one to its infimum. This is sometimes called the monotone convergence theorem.
Easier way to show $(\mathbb{Z}/(n))[x]$ and $\mathbb{Z}[x] / (n)$ are isomorphic
You don't have to go about building an isomorphism yourself if you make use of the isomorphism theorems. Use the obvious mapping $\phi:\Bbb Z[x]\to (\Bbb Z/(7))[x]$ where you are just reducing the coefficients of polynomials mod $7$. This is clearly onto the latter ring. The kernel is precisely the polynomials whose coefficients are all divisible by $7$, and that's exactly $(7)\lhd \Bbb Z[x]$. By an isomorphism theorem, $\Bbb Z[x]/\ker(\phi)\cong (\Bbb Z/(7))[x]$, and you already know what $\ker(\phi)$ is.
a Problem about Degree of Map between Spheres
If $f$ and $g$ are nowhere orthogonal, we have that the function $\left<f,g\right>$ has fixed sign, suppose it is positive. If it is negative, you can compose $g$ with the antipodal map: the degree may only change by a sign, not affecting the hypothesis. Now, for $t\in [0,1]$ you have the homotopy between $f$ and $g$ $$\frac{(1-t)f+tg}{\left\|(1-t)f+tg\right\|}$$ The fact that $\left<f,g\right>>0$ ensures you that the homotopy is well defined (you may have some problems if $f(x)=-g(x)$ for some $x$, but then $\left<f(x),g(x)\right><0$), hence $\operatorname{deg}f=\operatorname{deg}g$, contradiction. If $n$ is odd, the antipodal map has degree $+1$, hence it's enough to know that $\operatorname{deg}f\neq\operatorname{deg}g$ to get a contradiction.
Optimization, solving for the 'error' coefficient
$$ \hat Y = \exp(\beta_0 + \sum\beta_ix_i + \varepsilon)*F \\ \hat Y / F = \exp(\beta_0 + \sum\beta_ix_i + \varepsilon) \\ \ln (\hat Y / F) = \beta_0 + \sum\beta_ix_i + \varepsilon \\ \ln (\hat Y / F) - \beta_0 - \sum\beta_ix_i = \varepsilon $$
Absolute values of a linear combination of three random variables
For any random variable $U$ and constant $a>0$: $$ P(|U|\geq a)=P(U\leq -a)+P(U\geq a). $$
Smoothness from elliptic theory
From the regularity theory for quasilinear elliptic partial differential equations of second order. See Gilbarg and Trudinger, Elliptic Partial Differential Equations of Second Order, for the general theory. I don't know whether that covers the result you are citing in full strength, but it will get you started. This is well known but nontrivial; there is a lot of machinery you need to know.
Show the integral $\int_{0}^{1}\frac{\ln(\sin x)}{\sqrt x}\,dx$ converges
The only problem could be around $x=0$. Close to $x=0$, $\sin(x)\sim x$, $\log(\sin(x))\sim \log(x)$, so we face more or less $$\int \frac {\log(x)}{\sqrt x}\,dx$$ Integrating by parts, $$\int \frac {\log(x)}{\sqrt x}\,dx= \sqrt{x}\, (2 \log (x)-4)$$ No problem. Edit: I do not think that an antiderivative of the original integrand exists in closed form. Consider a Taylor expansion around $x=0$ $$\log(\sin(x))=\log (x)-\frac{x^2}{6}-\frac{x^4}{180}-\frac{x^6}{2835}-\frac{x^8}{37800}+O\left(x^{10}\right)$$ and integrate termwise. Look what happens when you expand the logarithm to $O(x^{2n})$ $$\left( \begin{array}{ccc} 0 & -4 & -4.00000000000 \\ 1 & -\frac{61}{15} & -4.06666666667 \\ 2 & -\frac{659}{162} & -4.06790123457 \\ 3 & -\frac{299849}{73710} & -4.06795550129 \\ 4 & -\frac{50974369}{12530700} & -4.06795861364 \\ 5 & -\frac{7065047897}{1736755020} & -4.06795881724 \\ 6 & -\frac{1018997296451}{250493512500} & -4.06795883167 \\ 7 & -\frac{384161980864027}{94436054212500} & -4.06795883275 \\ 8 & -\frac{25354690737550247}{6232779578025000} & -4.06795883284 \\ 9 & -\frac{3368801694231686097349}{828130724193447675000} & -4.06795883284 \\ 10 & -\frac{690604347317590078188457}{169766798459656773375000} & -4.06795883284 \end{array} \right)$$
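For what it is worth, a direct numerical evaluation agrees with the limit of the partial sums above (a Python sketch using mpmath, not part of the original answer):

    from mpmath import mp, quad, log, sin, sqrt

    mp.dps = 30
    val = quad(lambda x: log(sin(x)) / sqrt(x), [0, 1])
    print(val)   # approximately -4.06795883284..., matching the tabulated values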
Find all real eigenvalues, then find a basis for each eigenspace, and diagonalize, having problems when $\lambda =$ square root
The kernel has to be nontrivial, so you can always conclude (without any calculations) that \begin{align} \ker(A-\lambda_{\pm}I)=\ker\begin{pmatrix} 2-\lambda_{\pm} & 3\\ 0 & 0 \end{pmatrix} \end{align} With a short calculation, you get \begin{align} \ker\begin{pmatrix} 2-\lambda_{\pm} & 3\\ 0 & 0 \end{pmatrix}=\left\langle\ \begin{pmatrix}\frac{-3\pm\sqrt{57}}{8} \\ 1 \end{pmatrix}\right\rangle \end{align}
Prove that the Tangent Plane of a Surface is a Vector Space from Definition
Here's a slightly more pedestrian solution to your particular problem. You've started with curves $\alpha$ and $\beta$ defined on a small interval, which I'll call $I$ to save me from writing greek letters. The image $\alpha(I)$ (if $I$ is small enough) is contained in an open set $U$ of the kind that appears in the definition of "regular surface", and we can assume that associated to $U$ is a parameterization $F$. For each $t$, define $$ a(t) = F^{-1} (\alpha(t)) $$ Then $a$ is a curve in the plane, right? And $$ \alpha = F \circ a. $$ Similarly, you can define $b$, with $$ \beta = F \circ b. $$ Now it's easy to define $\gamma$ ... you add $a$ and $b$ instead of trying to add $\alpha$ and $\beta$: $$ \gamma(t) = F(a(t) + b(t)) $$ and if $I$ is small enough, then $a(t) + b(t)$ will always be in the domain of $F$, so this makes sense. Now what's $\gamma'(0)$? Well, if there's any justice in the world, it'll be $\alpha'(0) + \beta'(0)$. And indeed it is that. The proof is to throw in one more function $$ P : \Bbb R \to \Bbb R^2 : t \mapsto a(t) + b(t) $$ so that $$ \gamma = F \circ P, $$ and now you can apply the chain rule. This is really just @LeeMosher's answer with a lot more detail, but I hope it's of some help to you.
Lie algebras of reductive groups
Edit: There are some issues with the ideas in this answer discussed in the comments. As was already discussed in the comments, 1. is clear if $[G,G]$ is a direct product of simple algebraic groups, in particular, if $[G,G]$ is simply connected. Note that for any semisimple group $G$, we have an isogeny $\pi : G_{sc} \rightarrow G$ inducing an isomorphism on the Lie algebras (this is Proposition 9.15 in Malle-Testerman), so $\text{Lie}(G) \cong \text{Lie}(G_{sc})$. We have $[G,G] = G_1\cdots G_n$ a product decomposition into simple algebraic groups $G_i$ and consequently $[G,G]_{sc} = (G_1)_{sc} \times \cdots \times (G_n)_{sc}$. Using the isomorphisms between Lie algebras induced by $[G,G]_{sc} \rightarrow [G,G]$ and $(G_i)_{sc} \rightarrow G_i$ we get $$ \text{Lie}([G,G]) \cong \text{Lie}([G,G]_{sc}) \cong \prod_{i = 1}^n \text{Lie}((G_i)_{sc}) \cong \prod_{i = 1}^n \text{Lie}(G_i)$$ and this gives 1. For any connected reductive $G$ we have a surjective morphism $[G,G] \times Z(G) \rightarrow G$ of algebraic groups given by multiplication. The kernel of this morphism is finite, hence its Lie algebra is trivial (e.g. Thm. 7.4 a in Malle-Testerman) giving an isomorphism of Lie algebras (by Thm. 7.9 in Malle-Testerman) $$\text{Lie}(G) \cong \text{Lie}([G,G] \times Z(G)) \cong \text{Lie}([G,G]) \times \text{Lie}(Z(G)).$$
On deriving the arclength of a hyperbola
Using the substitution $x=\sin(u)$, you get your problem in the form $\int\!dx\, \frac{\sqrt{(1-mx^2)(1-x^2)}}{x^2 (1-x^2)}$. Legendre studied the problem of solving integrals which involve rational functions of $x$ and the square root of a polynomial of degree less or equal four in $x$. You will find a recipe on the page http://everything2.com/title/elliptic+integral+standard+forms. Following the argument, noting that in your case $A=0, B=1, C=x^2-x^4, D=1, R=(1-mx^2)(1-x^2)$, you will obtain the result as a function of the three elliptic integrals.
Proving the integral of a bounded function with finite discontinuities exists? (example)
Your idea is perfectly fine, and it is not difficult to implement it in full rigor. If we have a finite number of points $x_1,x_2,\ldots,x_K$ in $[0,1]$, for any $N\in\mathbb{N}$ large enough the intervals with length $\frac{1}{N}$ centered at them do not overlap, hence for any uniform partition of $[0,1]$ into $2N$ subintervals, each subinterval contains at most one "troublesome" point. Let $M$ be the maximum of $f$ over $\{x_1,x_2,\ldots,x_K\}$. The lower Riemann sum associated with the previous partition is zero, and the value of the upper Riemann sum is bounded by $\frac{KM}{2N}$, which goes to zero as $N\to +\infty$. This proves that the Riemann integral of $f$ over $[0,1]$ is zero.
Is the closure of the unit ball in $C^2[0,1]$ compact in $C^1[0,1]$?
For completeness, I'll modify the answer in Is there a reference for compact imbedding of Hölder space? to fit this situation. Take a bounded family of functions $(u_n)$ in $C^2([0,1])$. This implies $u_n$, $u_n'$, and $u_n''$ are uniformly bounded. By the mean value theorem, $u_n$ and $u_n'$ are equicontinuous. By the Ascoli-Arzelà theorem, we can extract a subsequence $v_k = u_{n_k}$ such that $v_k$ and $v_k'$ converge uniformly. This subsequence is Cauchy in $C^1[0,1]$ and therefore converges.
Diagonalizing the X and Z matrices
For the matrix $Z$, there is an eigenvalue $0$ associated with the eigenvector $e_1-e_n$ (where $n$ is the dimension of $Z$ and $e_i$ denotes the $i$th column of the identity matrix), and an eigenvalue $2$ with the eigenvector $e_1+e_n$. Next, there is an eigenvalue $1$ with multiplicity $\lceil (n-2)/2\rceil$ and eigenvectors $e_1+e_n-\frac{1}{2}(e_{i+1}+e_{n-i})$, $i=1,\ldots,\lceil (n-2)/2\rceil$. Finally, the eigenvalue $-1$ has multiplicity $\lfloor (n-2)/2\rfloor$ and corresponding eigenvectors $e_{i+1}-e_{n-i}$, $i=1,\ldots,\lfloor (n-2)/2\rfloor$. Similarly for $X$. There are two or three groups of eigenvalues (depending on whether the matrix dimension $n$ is even or odd). Try the eigenvectors $v_i=e_i+e_{n-i+1}$, $i=1,\ldots,\lfloor n/2\rfloor$, and $v_i=e_i-e_{n-i+1}$ for $i=\lceil n/2+1\rceil,\ldots,n$ corresponding to the eigenvalues $a+b$ and $a-b$. For $n$ odd, the "middle" eigenvector can be chosen simply to be $v_{\lceil{n/2}\rceil}=e_{\lceil{n/2}\rceil}$ with the eigenvalue equal to whatever is on the diagonal there.
$ \int_{0}^{1}x^m.(1-x)^{15-m}dx$ where $m\in \mathbb{N}$
$$f(m,n)=\int_0^1 x^m(1-x)^ndx$$ Repeated partial integration on the right hand side reveals that: $$(m+1)f(m,n)=n\,f(m+1,n-1)$$ $$(m+2)(m+1)f(m,n)=n(n-1)\,f(m+2,n-2)$$ $$\cdots$$ $$(m+n)\cdots (m+1)\,f(m,n)=n!\,f(m+n,0)$$ $$\text{i.e}$$ $$(m+n)!f(m,n)=n!\,m!\,f(m+n,0)$$ Since $$f(m+n,0)=\int_0^1 t^{m+n}dt=\frac{1}{m+n+1}$$ We have: $$f(m,n)=\frac{n!\,m!}{(m+n+1)!}$$ Now simply let $n=15-m$
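A quick symbolic check of the formula (a Python/SymPy sketch, not part of the original answer), here for the specific exponent $15-m$ from the question:

    import sympy as sp

    x = sp.symbols('x')
    for m in range(16):
        val = sp.integrate(x**m * (1 - x)**(15 - m), (x, 0, 1))
        expected = sp.factorial(m) * sp.factorial(15 - m) / sp.factorial(16)
        assert sp.simplify(val - expected) == 0
    print("all 16 values equal m!(15-m)!/16!")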
Mandelbrot set incorrect picture
In the loop

    while(count<N && complexC*complexC+realC*realC<=4.0f) {
        complexC = realC*realC-complexC*complexC + complex;
        realC = 2*complexC*realC + real;
        count++;
    }

You use the updated complexC to compute the new realC, but you ought to use the old one:

    float oldC = complexC;
    complexC = realC*realC-complexC*complexC + complex;
    realC = 2*oldC*realC + real;
    count++;

Besides, you seem to have flipped the real and imaginary parts. I think that only rotates the picture, though.
We toss three coins (each with pr(heads)= p). Let X be the number of heads that occur on the first two tosses and Y be the number of heads..
There are only $8$ outcomes of the first $3$ tosses, so you can just list them out:

$TTT$ - $(X,Y) = (0,0)$ - prob $(1-p)^3$
$TTH$ - $(X,Y) = (0,1)$ - prob $p(1-p)^2$
$THT$ - $(X,Y) = (1,1)$ - prob $p(1-p)^2$
$THH$ - $(X,Y) = (1,2)$ - prob $p^2(1-p)$
$HTT$ - $(X,Y) = (1,0)$ - prob $p(1-p)^2$
$HTH$ - $(X,Y) = (1,1)$ - prob $p^2(1-p)$
$HHT$ - $(X,Y) = (2,1)$ - prob $p^2(1-p)$
$HHH$ - $(X,Y) = (2,2)$ - prob $p^3$

Now, you should be able to complete the table.
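If it helps, the same bookkeeping can be done mechanically (a Python/SymPy sketch, not part of the original answer, assuming $Y$ counts the heads on the last two of the three tosses):

    from itertools import product
    from collections import defaultdict
    from sympy import symbols, simplify

    p = symbols('p')
    joint = defaultdict(int)
    for toss in product('HT', repeat=3):
        prob = 1
        for c in toss:
            prob *= p if c == 'H' else (1 - p)
        X = toss[:2].count('H')       # heads on tosses 1 and 2
        Y = toss[1:].count('H')       # heads on tosses 2 and 3
        joint[(X, Y)] += prob

    for key in sorted(joint):
        print(key, simplify(joint[key]))
    print(simplify(sum(joint.values())))   # the probabilities sum to 1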
Three questions that I failed to approach properly (statements, true or false).
(1) We want to know which values of $x$ will make the second series converge. If $x\le-1$ or $x\ge1$, then $x^2+1\ge2$, so we're plugging a value into the first series that's not in $[-2,2)$, so it diverges. On the other hand, if $x\in(-1,1)$, then $x^2+1\in[1,2)$, which is a subset of $[-2,2)$, so we're plugging in a value that makes the series converge. In other words, $(-1,1)$ is exactly the set of $x$ values for which $x^2+1\in[-2,2)$. This question isn't really about power series at all, just sets and functions. (2) You're right that $f$ might not be differentiable at $0$, but it is necessarily continuous. The left and right derivatives both take into account the value $f(0)$ (recall the limit definition of the derivative), and for both derivatives to exist, $f(x)$ must approach $f(0)$ as $x$ approaches $0$ from either side. That's exactly what it means for $f$ to be continuous at $0$. (3) Your idea for (b) is exactly right! A counterexample is $f(x)=1+\sin(e^x)/x$. For a similar reason, (a) is false: picture something like $f(x)=x^2+\sin(e^x)$. Your mental image for (c) is probably right too. Since $f'(x)$ converges to 8, the slope of $f$ at some point becomes larger than 7 and stays that way forever, so $f$ increases without bound.
Find if the polynomials are separable: $\mathbb{Z}_3[x], f(x) = x^6+x^5+x^4+2x^3+2x^2+x+2$, ...
As I stated in the comments, the quick way to test if a polynomial $f \in K[x]$ (where $K$ is a field) is separable is to compute $\gcd(f, f')$. One can see that this is equivalent to your definition of separable as follows. Fix an algebraic closure $\overline{K}$ and let $d$ be the degree of $f$. To say that $f$ has $d$ distinct roots over $\overline{K}$ means that $f$ has no multiple roots, i.e., $f$ is squarefree. One can show that a root $\alpha$ of $f$ is a multiple root iff $\alpha$ is also a root of $f'$ (the formal derivative of $f$). This is equivalent to saying that $x-\alpha$ is a factor of both $f$ and $f'$, hence also of $\gcd(f,f')$. Thus we see that $f$ is separable iff $\gcd(f,f') = 1$. Let's see how this applies to your first example. Since $3 = 0$ in $\mathbb{F}_3$ ($=\mathbb{Z}/3\mathbb{Z}$), then $f' = 2 x^4 + x^3 + x + 1$. We compute $\gcd(f,f')$ using the Euclidean algorithm: \begin{align*} f &= (2 x^{2} + x) f' + 2 x^{2} + 2\\ f' &= (x^{2} + 2 x + 2)(2x^2 + 2) + 0 \, . \end{align*} Thus we see that $\gcd(f,f') = 2 x^{2} + 2$, or $x^2 + 1$ if you prefer a monic $\gcd$. This shows that $f$ is not separable: the roots of $x^2+1$ are multiple roots of $f$. One can see this explicitly by factoring $f$. Using Sage, I found that $f$ splits completely over $\mathbb{F}_3(i) \cong \mathbb{F}_9$, factoring as $$ f = (x + i - 1) (x - i - 1) (x + i)^{2} (x - i)^{2} $$ where $\pm i$ are the roots of $x^2 + 1$. Sometimes you can determine if $f$ is squarefree by inspection. In (c), $f = x^3 - 1 = (x-1)(x^2 + x + 1)$. Thus $f$ is separable iff $x^2 + x + 1$ is separable. You can show this is the case by again using the Euclidean algorithm to compute $\gcd(f,f')$. Alternatively, $x^2 + x + 1$ is irreducible over $\mathbb{F}_2$ (degree $2$ and no roots) and since $\mathbb{F}_p$ is a perfect field for every prime $p$, then irreducible polynomials over $\mathbb{F}_p$ are always separable. (d) is easier to see. By the freshman's dream, $$ f = x^6 - 1 = (x^3)^2 - 1 = (x^3 - 1)^2 $$ so $f$ is not squarefree, hence not separable.
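The gcd computation in the answer was done with Sage; the same check can also be scripted with SymPy (a sketch, not part of the original answer):

    from sympy import symbols, Poly, factor

    x = symbols('x')
    f = Poly(x**6 + x**5 + x**4 + 2*x**3 + 2*x**2 + x + 2, x, modulus=3)
    fp = f.diff(x)

    print(fp)              # 2*x**4 + x**3 + x + 1 over GF(3), as computed above
    g = f.gcd(fp)
    print(g)               # a nonconstant gcd (x**2 + 1 up to a unit), so f is not separable
    print(factor(f.as_expr(), modulus=3))   # shows the repeated factor x**2 + 1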
What techniques would be used to prove $6x^2 +12x +8$ cannot be a perfect cube for integer $x > 0$
This is a direct result of Fermat's Last Theorem (the exponent-$3$ case) if you observe that $(x+2)^{3} = x^3 + (6x^2 + 12x + 8)$: if $6x^2+12x+8$ were a cube $y^3$ for some integer $x>0$, then $x^3+y^3=(x+2)^3$ would be a solution of $a^3+b^3=c^3$ in positive integers, which is impossible.
Find the condition that one root of $ax^2+bx+c=0$ be the reciprocal of a root of $a_1x^2+b_1x+c_1=0$.
In general, if $\alpha$ and $\beta$ are the roots of the equation $ax^2+bx+c=0$, then the sum of the roots is $\alpha+\beta=-\dfrac{b}{a}$ and the product of the roots is $\alpha\beta=\dfrac{c}{a}$. So, $\dfrac{\alpha+\beta}{\alpha\beta}=\dfrac{-b/a}{c/a}=\dfrac{-b}{c}\implies\dfrac{1}{\alpha}+\dfrac{1}{\beta}=\dfrac{-b}{c}$, and $\dfrac{1}{\alpha\beta}=\dfrac{a}{c}\ \hspace{20pt}\cdots(\text{i})$ Let $\alpha_1$ and $\beta_1$ be the roots of the equation $a_1x^2+b_1x+c_1=0$ such that $\alpha_1=\dfrac{1}{\alpha}$ and $\beta_1=\dfrac{1}{\beta}$, as they are reciprocal. Now from (i) we can write that $\alpha_1+\beta_1=-\dfrac{b}{c}$ and $\alpha_1\beta_1=\dfrac{a}{c}$. So the equation whose sum of roots is $-\dfrac{b}{c}$ and product of roots is $\dfrac{a}{c}$ can be written as: $x^2-(\alpha_1+\beta_1)x+\alpha_1\beta_1=0\implies x^2+\dfrac{b}{c}x+\dfrac{a}{c}=0\implies cx^2+bx+a=0$. Now this final equation $cx^2+bx+a=0$ and $a_1x^2+b_1x+c_1=0$ are the same. So comparing, we get $\dfrac{a_1}{c}=\dfrac{b_1}{b}=\dfrac{c_1}{a}$.
Draw 3d double integral in Maple
    plot3d(x/sqrt(x^2+y^2), x=0..3, y=x/3+2..3+sqrt(9-x^2), axes=box, orientation=[-120,30,0]);
    plot3d(x/sqrt(x^2+y^2), x=0..3, y=x/3+2..3+sqrt(9-x^2), axes=box, orientation=[-120,30,0], filled=true);
If $a_1= \sqrt{6}$ and $a_{n+1} = \sqrt{6 + a_n}$, what is the limit of $a_n$ as $n$ goes to infinity?
If this sequence is to converge to a value $L$, then we'd have: $$ L = \sqrt{6+L} $$ which implies $L\ge0$ and then $L=3$. It remains to show that the sequence does converge: check (for instance by induction) that it is monotonically increasing and bounded above by $3$, hence convergent.
Ideals in a ring with identity
By definition a subring must contain the identity element of the ring. If $I$ does not contain $1$ then by definition $I$ is not a subring. This doesn't mean that $I$ is not a ring. For example $\mathbb{R \times R}$ (pairs of real numbers) is a ring. The multiplication and addition are $(a, b) + (c, d) = (a + c, b + d)$ and $(a, b)(c, d) = (ac, bd)$. The set of pairs $I = \{(a, 0) \ | \ a \in \mathbb R\}$ is an ideal in this ring and it is certainly a ring in its own right (it's the ring $\mathbb R$). But it's not a subring of $\mathbb{R \times R}$ because the unit in $\mathbb{R \times R}$ is $(1, 1)$ and this element is not contained in $I$. Another way of putting this is to say that a subset $I \subseteq R$ of a ring is a subring if $I$ is a ring and the inclusion map $I \hookrightarrow R$ is a homomorphism of rings. The inclusion $I \hookrightarrow \mathbb{R \times R}$ respects multiplication and addition, but it does not map the identity element $(1, 0) \in I$ to the identity element $(1, 1) \in \mathbb{R \times R}$ so it is not a ring homomorphism.
Why does choosing 1 or 0 for the don't-care values give a different function in a Karnaugh map?
We assign the value $1$ to a don't care in order to try and increase the size of prime implicants, thus reducing the complexity and cost of the circuitry. Don't cares by definition do not affect the behavior of a combinational circuit, so treating them as $1$'s and $0$'s is good. On the other hand, if you're designing a sequential circuit, problems in wiring may cause your machine to jump to an unexpected state, if don't cares are treated as such, and not assigned a value of $0$ and $1$ then the machine will be stuck in that state, and will not function properly. You can either set the don't cares in a way to always return to the initial state in the loop (costly, but a time-saver), or treat them as $1$'s and $0$'s to increase prime implicants (cheaper, but has to go through the loop from an arbitrary point before returning to start). P.S. : I'm not sure if the question is on topic here.
Solving $\displaystyle x(z-2y^2)\frac{\partial z}{\partial x}=\left(z-\frac{\partial z}{\partial y}\right)(z-y^2-2x^3)$.
Please take a look at the solution of this question and this question The goal is to prove that $$d\left(\frac{y}{x}\right)=0,d\left(\frac{z}{x}-\frac{y}{x}+x^2\right)=0$$
valid or invalid argument - contradicting arguments
The argument is invalid. Here is a refutation by logical analogy:

Some coins are dimes.
All nickels are coins.
Therefore, some nickels are dimes.

The argument based on formal logic notation fails, since it uses the wrong symbolizations. For example, "some rational numbers are powers of five" needs to be symbolized as: $$\exists x (Q(x) \land R(x))$$ and not as: $$\exists x (Q(x) \rightarrow R(x))$$ So ... either the text was asking you to find the error in the 'Solution' ... or the text provided a horribly mistaken Solution! Given how everything else labeled 'Solution' seems to be treated as the actual answer to the exercises, I fear it's the latter ... what text is this?!
Finding a paremetric curve traced out by certain lines around a circle
The angle $OXT$ (oops -- typo: I meant $OTX$) is a right angle. That means that $$ \cos(t) = \frac{OT}{OX} $$ so $$ OX = \frac{radius}{\cos(t)}. $$ I think that's what you needed, yes?
Exact sequence of groups to exact sequence of sheaves
Exactness at $\underline{F^X}$ and $\underline{G^X}$ is clear, so you are asking about surjectivity at $\underline{H^X}$. In other words, given a point $x \in X$, an open set $U \ni x$ and a continuous map $\phi : U \to H$, when can I shrink $U$ to $V \ni x$ and lift $\phi$ to $\psi: V \to G$? I think the right condition is There is a nonempty open set $W$ in $H$, and a continuous section $\sigma: W \to G$ of the map $G \to H$. This implies that $\underline{G^X} \to \underline{H^X}$ is surjective. Proof: Let $(x, U, \phi)$ be as above. Using the group structure on $G$ and $H$, we may translate $W$ to contain $\phi(x)$ and adjust our section to still be a section. Now, take $V = \phi^{-1}(W)$. Since $\phi$ is continuous, $V$ is open, and $\sigma \circ \phi$ gives the desired lift of $\phi$. $\square$ In the other direction, if the boxed condition does not hold, then the identity map in $\underline{H^H}$ is not in the image of $\underline{G^H}$. EDIT Oh, I see that Eric Wofsey has written the same thing back on MO.
Integration with trigonometry
Are you requested to find the red-coloured area? Then it's simply $$A=\int_{2\arcsin\left(\frac14\right)} ^{\pi} 2\sin(x)-\int_{2\arcsin\left(\frac14\right)} ^{\pi} \cos\left(\frac{x}{2}\right)\\=\left[-2\cos(\pi)+2\cos\left(2\arcsin\left(\frac14\right)\right)\right]-\left[2\sin(\frac{\pi}{2})-2\sin\left(\frac{2\arcsin\left(\frac14\right)}{2}\right)\right]\\=2-2+2\sin\left(\frac{2\arcsin\left(\frac14\right)}{2}\right)+2\cos\left(2\arcsin\left(\frac14\right)\right)\\=\frac12+\frac74\\=\frac94=2.25$$
Maximum Value Theorem proof in a Euclidean space
I guess you mean the Weierstrass extreme value theorem. A generalization of its case for the reals proposed in Wikipedia states that if $K$ is a compact set and $f:K\to\Bbb R$ is a continuous function, then $f(K)$ is bounded and there exist $p,q\in K$ such that $f(p)=\sup_{x\in K} f(x)$ and $f(q)=\inf_{x\in K} f(x)$. In Wikipedia continuity of the function $f$ is defined in terms of open sets, but it is easy to show that a function $f$ is continuous according to the Wikipedia definition iff $f$ is continuous according to the usual definition in terms of $\varepsilon$ and $\delta$. Also recall that by the Heine-Borel Theorem, a subset $S$ of $\Bbb R^n$ is compact iff $S$ is closed (that is, for each $x\in \Bbb R^n\setminus S$ there exists $\varepsilon>0$ such that for each $y\in \Bbb R^n$ with $d(x,y)<\varepsilon$ we have $y\in \Bbb R^n\setminus S$ too) and bounded (that is, for each $\varepsilon>0$ there exists a finite subset $F$ of $S$ such that for each $x\in S$ there exists $y\in F$ such that $d(x,y)<\varepsilon$). Therefore we have the following generalization of the Weierstrass extreme value theorem. Theorem. If $K\subset\Bbb R^n$ is a non-empty closed and bounded set and $f:K\to\Bbb R$ is a continuous function, then $f(K)$ is bounded and there exist $p,q\in K$ such that $f(p)=\sup_{x\in K} f(x)$ and $f(q)=\inf_{x\in K} f(x)$. In order to prove the theorem we use the preservation of compactness by continuous maps. Lemma. If $V, W$ are topological spaces, $f:V\to W$ is a continuous function, and $K\subset V$ is compact, then $f(K)\subset W$ is also compact. Proof. Let $\mathcal W$ be any open cover of the set $f(K)$. By continuity of the map $f$, the family $\{f^{-1}(U):U\in\mathcal W\}$ is an open cover of the set $K$. Since the set $K$ is compact, it has a finite subcover $\{f^{-1}(U):U\in\mathcal W_0\}$. Then $\mathcal W_0$ is a finite cover of the set $f(K)$. $\square$ Proof of Theorem. By the Heine-Borel Theorem, $K$ is compact. By the Lemma, $f(K)$ is compact. Again by the Heine-Borel Theorem, $f(K)$ is closed and bounded in $\Bbb R$. Since $f(K)$ is a non-empty closed and bounded subset of $\Bbb R$, it contains its infimum and its supremum. Thus $\inf_{x\in K} f(x)\in f(K)$ and $\sup_{x\in K} f(x)\in f(K)$. $\square$
Is the complex form of the Fourier series of a real function supposed to be real?
You are using the formula for the complex Fourier coefficients, which are usually denoted by $c_n$. These are usually complex, and they lead to the representation: $f_f(x) = \sum_{n=-\infty}^\infty c_n e^{inx}$ This is still (more or less) the original function and is therefore real. There is also a transformation into the sine-cosine representation: $f_f(x) = a_0 + \sum_{n=1}^\infty a_n \cos(nx) + b_n \sin(nx)$ where the $a_n$ and $b_n$ are real if the original function was real. You can even go back and forth between the 'real' and the 'complex' coefficients. This comes from the fact that you can express the sine as well as the cosine as $\sin(x) = \frac{1}{2i}(e^{ix}-e^{-ix})$ and $\cos(x) = \frac{1}{2}(e^{ix}+e^{-ix})$. Or the other way around, which might be more familiar: $e^{ix} = \cos(x)+i\sin(x)$ You can find all of this, including the formulas for converting the real coefficients $a_n,b_n$ to the complex ones $c_n$ and vice versa, here: http://mathworld.wolfram.com/FourierSeries.html
$K \overset{\subset}{\to} NK$ subset homomorphism of groups?
$K$ is a subgroup of $NK$, so the inclusion $k\in K\mapsto k= 1\cdot k\in NK$ is a group homomorphism (this works for any subgroup of any group). This is what the author calls $K\stackrel{\subset }\to NK$. Another frequent notation is $K\hookrightarrow NK$.
Is each example a subspace or not? And dimension of subspace.
(i) You are right, the upper triangular matrices are a vector subspace of the vector space of $2\times 2$ matrices. But you need to prove this. Nothing hard involved, all we need to do is to show that (a) the sum of two upper triangular matrices is upper triangular, and (b) that an upper triangular matrix times a constant is upper triangular. The full vector space of $2\times 2$ matrices is basically $\mathbb{R}^4$. But instead of writing the entries in the usual $(a,b,c,d)$ format, we write them as a $2\times 2$ array. So the space of $2\times 2$ matrices has a basis consisting of the $4$ matrices that have a $1$ in one place, and $0$'s elsewhere. Three of these basis elements are upper triangular. So the space of upper triangular matrices has dimension at least $3$. And the dimension is definitely not $4$. The name $U_2$ is harmless, just a name. (ii) The name is $GL_2$, meaning the general linear group. But don't worry about the name. It is, however, a standard name, and some other year you may learn more about $GL_n$. In this case, we have been told just enough about it to answer the question. This is not a subspace of the space of $2\times 2$ matrices. To show this, it is enough to show either that there are two invertible matrices whose sum is not invertible, or that there is an invertible $2\times 2$ matrix $M$ and a constant $c$ such that $cM$ is not invertible. Both are easy, and you only need to do one. Think $c=0$. Or think about adding together two diagonal matrices, one the identity matrix and the other $\dots$. (iii) You need to show that the sum of two matrices of the shape described is of the shape described, and that a matrix of the shape described, times a constant, is of the shape described. Basically, this subspace is the same as the set of all $4$-tuples of the shape $(a,b,c,-a)$. To find the dimension, maybe show that the three matrices which if written "flat" are $(1,0,0,-1)$, $(0,1,0,0)$, and $(0,0,1,0)$ are of the right form, and linearly independent, and in our subspace. So the space has dimension at least $3$. But the dimension is not $4$, else it would be the whole space of $2\times 2$ matrices.
Can the chromatic number of a graph be defined by the chromatic number of its subgraph(s)?
No, you are correct, the chromatic number of that graph is at least $4$, and your argument is completely valid.
Show that, $2A+B=\zeta(3)$
Fairly straightforward as I see it. $$\begin{align}A=\int_0^{\infty}\left(\frac x{e^x+1}\right)^2dx&=\int_0^{\infty}\left[\frac{x^2}{e^x+1}+x^2\frac d{dx}\left(\frac1{e^x+1}\right)\right]dx\\ &=\left.\frac{x^2}{e^x+1}\right|_0^{\infty}+\int_0^{\infty}\frac{x^2-2x}{e^x+1}dx\\ &=\int_0^{\infty}\frac{x^2-2x}{e^x+1}dx\end{align}$$ Recall that $$\begin{align}\int_0^{\infty}\frac{x^n}{e^x+1}dx&=\int_0^{\infty}\frac{x^ne^{-x}}{1+e^{-x}}dx\\ &=\sum_{k=0}^{\infty}\int_0^{\infty}x^ne^{-x}(-1)^ke^{-kx}dx\\ &=\sum_{k=0}^{\infty}\frac{(-1)^k\Gamma(n+1)}{(k+1)^{n+1}}\\ &=\left(1-\frac1{2^n}\right)n!\zeta(n+1)\end{align}$$ So $$A=\left(1-\frac14\right)(2)\zeta(3)-2\left(1-\frac12\right)(1)\zeta(2)=\frac32\zeta(3)-\zeta(2)$$ Similarly, $$\begin{align}B=\int_0^{\infty}\left(\frac x{e^x-1}\right)^2dx&=\int_0^{\infty}\left[\frac{-x^2}{e^x-1}-x^2\frac d{dx}\left(\frac1{e^x-1}\right)\right]dx\\ &=\left.\frac{-x^2}{e^x-1}\right|_0^{\infty}+\int_0^{\infty}\frac{-x^2+2x}{e^x-1}dx\\ &=\int_0^{\infty}\frac{-x^2+2x}{e^x-1}dx\end{align}$$ And again, $$\begin{align}\int_0^{\infty}\frac{x^n}{e^x-1}dx&=\int_0^{\infty}\frac{x^ne^{-x}}{1-e^{-x}}dx\\ &=\sum_{k=0}^{\infty}\int_0^{\infty}x^ne^{-x}e^{-kx}dx\\ &=\sum_{k=0}^{\infty}\frac{\Gamma(n+1)}{(k+1)^{n+1}}=n!\zeta(n+1)\end{align}$$ So $$B=-(2)\zeta(3)+2(1)\zeta(2)=-2\zeta(3)+2\zeta(2)$$ Adding up, $$2A+B=\zeta(3)$$
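A numerical cross-check of the final identity (a Python sketch using mpmath, not part of the original answer):

    from mpmath import mp, quad, exp, zeta, inf

    mp.dps = 25
    A = quad(lambda x: (x / (exp(x) + 1)) ** 2, [0, inf])
    B = quad(lambda x: (x / (exp(x) - 1)) ** 2, [0, inf])
    print(2 * A + B)   # 1.2020569...
    print(zeta(3))     # agrees with zeta(3)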
Probability - Brick in box
The $20$ cases are given by $C^{6}_{3} = 20$. They are ways of splitting a set of $6$ distinct numbers into $2$ sets of $3$ numbers. I.e., consider $\{a_{1},a_{2},a_{3},a_{4}, a_{5},a_{6}\}$ to be in increasing order. Then the $5$ good cases are:

$(\{a_{1},a_{2},a_{3}\},\{a_{4}, a_{5},a_{6}\})$
$(\{a_{1},a_{2},a_{4}\},\{a_{3}, a_{5},a_{6}\})$
$(\{a_{1},a_{2},a_{5}\},\{a_{3}, a_{4},a_{6}\})$
$(\{a_{1},a_{3},a_{4}\},\{a_{2}, a_{5},a_{6}\})$
$(\{a_{1},a_{3},a_{5}\},\{a_{2}, a_{4},a_{6}\})$

Edit: there should be ways of proving there are only $5$ without checking them all. Here is one method: $a_1$ must be part of the first set because if it were part of the second set, there is no number smaller than $a_1$, so the brick wouldn't fit. Similarly, $a_6$ must be part of the second set. It remains to split the remaining $4$ numbers $\{a_{2},a_{3},a_{4},a_{5}\}$ into $2$ sets of $2$. We have $C^{4}_{2} = 6$ ways of doing this. Five of those give us the above sets. The sixth one is $(\{a_{4},a_{5}\},\{a_{2},a_{3}\})$ giving $(\{a_{1},a_{4},a_{5}\},\{a_{2},a_{3},a_{6}\})$ which does not describe fitting bricks because $a_{4}>a_{3}$.
Is it correct to consider the integral as the continuous equivalent of summation?
Yes. In fact, if we consider Lebesgue integration, we can view sums as a specific example of an integral (namely, integration wrt counting measure over some countable set). In probability theory, this also means that we can often prove statements for both continuous and discrete random variables at the same time, by viewing the sums we find in the discrete case as integrals as well. This makes the theory a lot more elegant compared to having to repeat the proofs and definitions for the continuous case and discrete case separately.
Existence of additional independent RVs
Suppose $\Omega = \{1, 2 \}$, $\mathcal{F} = 2^{\Omega}$, $P(1) = P(2) = \frac12$. Put $X(\omega) = \omega$. It's easy to see that this is a counterexample. Indeed, suppose that $Y$ has the same distribution and $X$ and $Y$ are independent. Then $P(X = 1, Y =1)$ must be equal to $\frac{1}{4}$, but there is no event in $\mathcal{F}$ with that probability.
Does $\lim\limits_{n\to\infty}\frac{a_{n+1}}{a_n}=1$ imply that $a_n$ convergent?
A counter-example is $a_ n = 2 + \sin(\log(n))$ which is not convergent (it oscillates between $1$ and $3$). But $$ \left\lvert \frac{a_{n+1}}{a_n} - 1 \right\rvert = \left\lvert \frac{\sin(\log(n+1))- \sin(\log(n))}{2 + \sin(\log(n))} \right\rvert \\ \le \left\lvert \sin(\log(n+1))- \sin(\log(n)) \right\rvert = \left\lvert \frac{\cos(\log(x_n))}{x_n}\right\rvert $$ for some $x_n \in (n, n+1)$, using the mean value theorem. It follows that $$ \left\lvert \frac{a_{n+1}}{a_n} - 1 \right\rvert \le \frac{1}{n} \to 0 \, , $$ i.e. $\frac{a_{n+1}}{a_n} \to 1$. Roughly speaking, $(a_n)$ oscillates, but with decreasing frequency, so that the ratio of successive sequence elements approaches one. (This example is taken from $\lim\limits_{n \to \infty} \frac{a_n}{a_{n+1}} = 1 \Rightarrow \exists c \in \mathbb{R}: \lim\limits_{n \to \infty} {a_n} = c$. I copied it here because the other question excludes sequences converging to $\pm \infty$, so it is not an exact duplicate.)
I have done the second direction of the proof. Hopefully, it is true. Please show my mistakes?
Let $(x_1, \dots, x_n)$ be a coordinate system centred on $p$ and define $f^p_i(x) = x_i$ locally around $p$ and multiply it by a suitable bump function to extend it to a $C^{\infty}$ function over $M$. Then $Y_i = (Y f^p_i)_p = (X f^p_i)_p = X_i$ and so $X_p = Y_p$ for all $p$. Therefore $X = Y$.
Would a theory with model $\{\}$ be consistent?
This is really up to convention. In most cases it is convenient to disallow the empty set from being a model of anything. It helps simplifying a lot of statements, for example: "Every finite partial order has a maximal element" is false if we allow the empty set to be a model of the theory of partial orders; another example would be the inference rule $\forall x\varphi\rightarrow\exists x\varphi$ (which in turns implies that the empty set is never a model of a consistent theory, since $\forall x(x=x)\rightarrow\exists x(x=x)$, so in particular there is some $x$ in the universe). On the other hand, in cases like ordinals and the likes, it is convenient to have the empty set as a partial order, as it simplifies a lot of things when dealing with ordinals. And certainly there are other examples of this nature elsewhere.
Difference between Verify, Prove and Argue
OK so there are several possible mathematical contexts. Consider the sentence "I will prove the Orbit-Stabiliser Theorem." Another way of saying the same thing is "I will argue mathematically that the Orbit-Stabiliser Theorem is true." Because a proof is precisely a mathematical argument that something is true. So the Stanford lecturer you mentioned could have said "I will prove that 5 is a prime number", and it would mean the same thing. So the words are not interchangeable in the sense that you can't say "I will argue the Orbit-Stabiliser Theorem", but if you change the syntax, then you can avoid using the word "proof". Now the syntax and context really matter, because "argue" is a more versatile word than "prove". For example, we often say "assume X and argue for a contradiction". This means that we will try to prove that X is false. So in general when you see "we will argue..." you should read it as "we will make a series of logical steps", and then you should use the context to see what is being argued. Now "verify" means something very specific: "Verify Theorem X" never means "prove theorem X" (unless the proof involves checking one particular case, in which case it will be clear what is meant). Let's say that Theorem X says something about numbers in the set $Y$. The context is almost always that you have proved Theorem X, and as an example to help your understanding, you will be asked to "Verify theorem X for the case $y\in Y$". And this simply means to check the truth of Theorem X for the number $y$. So, to reiterate, verifying a theorem almost never proves the theorem. In summary, you will see all three words used in the context of proofs, but they are not precisely interchangeable. P.S. Greetings from Scotland!
Is the colimit of an expanding sequence of $T_4$ spaces $T_4$?
The Tietze extension theorem does not require points to be closed, so your "cute argument" works just as well for $T_4$ spaces, assuming all the inclusions are closed. In detail, suppose $A,B\subseteq X_\infty$ are disjoint closed subsets. By Urysohn's lemma, there is a continuous function $f_1:X_1\to[0,1]$ such that $f_1$ is $0$ on $A\cap X_1$ and $1$ on $B\cap X_1$. We can then continuously extend $f_1$ to $X_1\cup((A\cup B)\cap X_2)$ by making it $0$ on all of $A\cap X_2$ and $1$ on all of $B\cap X_2$, and then by the Tietze extension theorem this extends continuously to a function $f_2$ on all of $X_2$. We can similarly extend $f_2$ to $f_3:X_3\to [0,1]$ that is $0$ on $A\cap X_3$ and $1$ on $B\cap X_3$ and so on. Gluing together all these functions gives a continuous $f:X_\infty\to [0,1]$ which is $0$ on $A$ and $1$ on $B$. Here is a counterexample if the inclusions are not closed. Let $$X_\infty=(\mathbb{N}\times\{0,1\})\cup\{a\}$$ with the following topology. A set $U\subseteq X_\infty$ is open iff it satisfies the following conditions: If $a\in U$, then $(n,0)\in U$ for all but finitely many $n\in\mathbb{N}$. If $(n,1)\in U$ then $(n,0)\in U$. Note that $X_\infty$ is not $T_4$, since $\{a\}$ and $\mathbb{N}\times\{1\}$ are disjoint closed sets which do not have disjoint neighborhoods (any neighborhood of either must contain $(n,0)$ for all but finitely many $n$). Now let $$X_n=(\mathbb{N}\times\{0\})\cup\{a\}\cup (\{0,\dots,n-1\}\times\{1\})\subset X_\infty.$$ Clearly $X_\infty$ is the union of the $X_n$, and it is easy to see that it is in fact the colimit of them (if $U$ fails to satisfy one of the conditions above for being open, then that is detected in some $X_n$). I claim moreover that each $X_n$ is $T_4$. Indeed, $X_n$ is just the disjoint union of $(\{k\in\mathbb{N}:k\geq n\}\times\{0\})\cup\{a\}$ (which is a compact Hausdorff space, homeomorphic to $\mathbb{N}\cup\{\infty\}$) and $n$ 2-point spaces (namely, $\{(k,0),(k,1)\}$ for $k=0,\dots,n-1$). Any 2-point space is $T_4$, and a disjoint union of $T_4$ spaces is $T_4$, and thus each $X_n$ is $T_4$. (You can get a similar example in which points are closed by replacing each point $(n,0)$ with an infinite set of points that accumulate at $(n,1)$. Or for a similar example that is more geometrical, let $X_n=[0,1]^2\setminus((0,1/n)\times\{0\})$ as a subspace of $\mathbb{R}^2$. The colimit is then $[0,1]^2$, but with its topology modified so that $(t,0)$ does not approach $(0,0)$ as $t$ approaches $0$. This is not normal since $\{(0,0)\}$ and $(0,1]\times\{0\}$ are disjoint closed sets that don't have disjoint neighborhoods.)
Centroid of a polygon with weighted vertices
Let us compute the centroid of a triangle having a surface density linearly interpolated between the vertices. WLOG, the unit triangle by $(0,0),(0,1),(1,0)$ will do. The density function is $w_{xy}=w_{00}+x(w_{10}-w_{00})+y(w_{01}-w_{00})=ax+by+c$ where the coefficients are deduced from the weights at the corners. For the mass: $$\int_{x=0}^1\int_{y=0}^{1-x}(ax+by+c)\,dy\,dx=\int_{x=0}^1(ax(1-x)+\frac{b(1-x)^2}2+c(1-x))\,dx=\frac a6+\frac b6+\frac c2.$$ For the $x$ moment: $$\int_{x=0}^1\int_{y=0}^{1-x}x(ax+by+c)\,dy\,dx=\int_{x=0}^1(ax^2(1-x)+\frac{bx(1-x)^2}2+cx(1-x))\,dx\\ =\frac a{12}+\frac b{24}+\frac c6.$$ and similarly for $y$. Then the centroid is $$\left(\frac{2a+b+4c}{4a+4b+12c},\frac{a+2b+4c}{4a+4b+12c}\right).$$ For an arbitrary triangle, compute the affine transform that maps the canonical triangle to the arbitrary one, and the centroid will just follow. To get the centroid of a triangulation, compute the centroid of the centroids, each weighted by the corresponding triangle mass.
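A quick check of the unit-triangle formulas (a Python/SymPy sketch, not part of the original answer): integrate the linear density symbolically and compare with the stated mass and centroid.

    import sympy as sp

    x, y, a, b, c = sp.symbols('x y a b c')
    w = a*x + b*y + c

    mass = sp.integrate(sp.integrate(w, (y, 0, 1 - x)), (x, 0, 1))
    mx   = sp.integrate(sp.integrate(x*w, (y, 0, 1 - x)), (x, 0, 1))
    my   = sp.integrate(sp.integrate(y*w, (y, 0, 1 - x)), (x, 0, 1))

    print(sp.simplify(mass - (a/6 + b/6 + c/2)))                        # 0
    print(sp.simplify(mx/mass - (2*a + b + 4*c)/(4*a + 4*b + 12*c)))    # 0
    print(sp.simplify(my/mass - (a + 2*b + 4*c)/(4*a + 4*b + 12*c)))    # 0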
Iterated integral, vertically/horizontally simple
Please see the diagram. If you integrate wrt $y$ first, you will need to do it twice as for $-4 \leq x \leq 0, $ the area bound is parabola whereas for $-9 \leq x \leq -4,$ the area is bound below by the parabola and above by the plane. So instead set it up wrt $x$ first. For the second one, your boundary is correct but what you have shown as lower bound of $x$ should be upper bound otherwise you will get negative value. It should be clear from the diagram. $\displaystyle \int_{-3}^{2} \int_{y-6}^{-y^2} dx \, dy$
best approximation polynomial $p_1(x)\in P_1$ for $x^3$
Let $p(x)$ be the polynomial you are looking for. First note that $f(x)-p(x)$ is a polynomial of degree $3$ with leading coefficient $1$. You know that among the polynomials of degree $3$ with leading coefficient $1$, $$w(x)=\frac{1}{4}T_{3}(x)$$ is the one whose maximal absolute value on the interval $[-1, 1]$ is minimal (the minimality property of the Chebyshev polynomials). So you know that you must have $f(x)-p(x)=\frac{1}{4}T_3(x)$, hence $p(x)=x^3-\frac{1}{4}T_3(x)=x^3-\frac{1}{4}(4x^3-3x)=\frac{3}{4}x$.
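A small numerical illustration (Python sketch, not part of the original answer): the error $x^3-\frac34x$ equioscillates with amplitude $\frac14$ on $[-1,1]$, and another linear approximation, e.g. the least-squares line $\frac35x$, has a larger maximum error.

    import numpy as np

    xs = np.linspace(-1, 1, 100001)
    err_best = np.max(np.abs(xs**3 - 0.75 * xs))
    print(err_best)                            # 0.25, the minimax error 1/4

    print(np.max(np.abs(xs**3 - 0.6 * xs)))    # 0.4, worse than the minimax choice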
Distributional derivative of a function with jumps
They must mean that $\lim_{x\to a^-}f'(x)$ and $\lim_{x\to a^+}f'(x)$ both exist. To prove that $g$ is continuous at $x=a$ we first must give $g(a)$ a value. To do this we can start with showing that $g(a-) = g(a+)$. But that follows directly from the definition: $$\begin{align} g(a+) - g(a-) &= (f(a+) - \Delta H((a+)-a)) - (f(a-) - \Delta H((a-)-a)) \\ &= (f(a+) - \Delta) - (f(a-) - 0) \\ &= (f(a+) - f(a-)) - \Delta \\ &= 0. \end{align}$$ No, $g' = [f']$ is intended in the sense of ordinary derivatives. Those derivatives are defined and coincide everywhere except at $x=a$.
To which field is this quotient isomorphic $\frac{\mathbb{Z}[x]}{\langle 2x-1, 5 \rangle}$?
The ideal ${\langle 2x-1, 5 \rangle}$ is the kernel of the epimorphism from $\mathbb Z[x]$ to $\mathbb Z_5$ that sends $x$ to $3$ (and $1$ to $1$).
Is it always possible to partition data in such a way that Simpson's paradox is achieved?
There are some additional obvious cases to exclude. For example, if $C=1$ we cannot construct the paradox, because either $c_1/d_1$ or $c_2/d_2$ is zero and not greater than any ratio of natural numbers. If you allow $a_1 = 0$ and $c_1 \geq d_1,$ it seems possible to find the numbers in almost all other cases. For example, for $A=1,$ $B=2,$ $C=2,$ $D=5,$ we have $$ \frac12 > \frac25,$$ but $$ \frac01 < \frac14, \quad \frac11 < \frac21. $$ The case where $c_1 > d_1$ (which cannot arise in the natural context of the paradox) seems objectionable, however. If we exclude it, then I think we also exclude any cases where $B=2,$ since then the only possible values of $a_1/b_1$ and $a_2/b_2$ are $0$ and $1,$ as shown above. There may be other small values of $A,B,C,D$ for which it is impossible to construct the paradox. I do not claim to have fully explored them. The intuition that $A/B$ and $C/D$ must only "differ slightly," however, does not follow through if you are willing to consider large enough values of $A,B,C,D.$ That is, for large enough numbers, it seems possible to construct the paradox even when $A/B$ is much greater than $C/D.$ For example, $$ \frac{1 + 899}{11+989} = \frac{9}{10} > \frac{1}{10} = \frac{90+10}{989+11}, $$ but $$ \frac{1}{11} < \frac{90}{989}, \quad \frac{899}{989} < \frac{10}{11}. $$ So I think the limitation on finding instances of the paradox is associated only with the absolute sizes of $A,B,C,D,$ not with their relative sizes.
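If you want to experiment, a brute-force search over all admissible splits is cheap for small numbers. A Python sketch (not part of the original answer; it demands strict inequalities in both subgroups and positive subgroup denominators, and the test input below is chosen here just for illustration):

    from fractions import Fraction

    def simpson_splits(A, B, C, D):
        """Yield splits with a1/b1 < c1/d1 and a2/b2 < c2/d2 although A/B > C/D."""
        if not Fraction(A, B) > Fraction(C, D):
            return
        for b1 in range(1, B):
            for d1 in range(1, D):
                for a1 in range(0, A + 1):
                    for c1 in range(0, C + 1):
                        a2, b2, c2, d2 = A - a1, B - b1, C - c1, D - d1
                        if (Fraction(a1, b1) < Fraction(c1, d1)
                                and Fraction(a2, b2) < Fraction(c2, d2)):
                            yield (a1, b1, c1, d1), (a2, b2, c2, d2)

    # e.g. A/B = 3/5 > 2/5 = C/D, yet 0/1 < 1/4 and 3/4 < 1/1 in the subgroups
    print(list(simpson_splits(3, 5, 2, 5)))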
If $ p \equiv 1 \pmod{4}$, prove $((\frac{p-1}{2})!)^2 \equiv -1 \pmod {p}$ where p is prime.
If we start from Wilson's theorem, $(p-1)! \equiv -1 \pmod{p}$ for a prime $p$, then we can write $$\begin{align} -1 &\equiv (p-1)! = \prod_{k=1}^{\frac{p-1}{2}} k \cdot \prod_{m = 1}^{\frac{p-1}{2}} (p-m)\\ &\equiv \left(\frac{p-1}{2}\right)! \cdot (-1)^{\frac{p-1}{2}}\prod_{m = 1}^{\frac{p-1}{2}} (m-p)\\ &\equiv (-1)^{\frac{p-1}{2}} \left(\left(\frac{p-1}{2}\right)!\right)^2 \end{align}$$ for odd primes $p$. Now if $p \equiv 1 \pmod{4}$, the factor $(-1)^{\frac{p-1}{2}}$ is $1$, hence $$\left(\left(\frac{p-1}{2}\right)!\right)^2 \equiv -1 \pmod{p}$$ then. It is Wilson's theorem that follows from pairing up each $1 < k < p-1$ with its inverse, leaving $(p-1)! \equiv 1\cdot(p-1) \equiv -1 \pmod{p}$.
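A quick check for the first few primes congruent to $1$ modulo $4$ (Python sketch, not part of the original answer):

    from math import factorial

    for p in [5, 13, 17, 29, 37, 41]:          # primes congruent to 1 mod 4
        lhs = factorial((p - 1) // 2) ** 2 % p
        print(p, lhs, (-1) % p)                # lhs equals p - 1, i.e. -1 mod p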
A problem about ring isomorphism?
Indeed, $$\ker f_* = \{r+I : r \in R,\, f(r)+I' = I'\} = \{r+I : r \in f^{-1}(I')\} = f^{-1}(I')/I$$ and note that $f_*$ is surjective if and only if for each $r' \in R'$ there exists $r \in R$ such that $r'+I' = f(r)+I'$, that is, if for each $r' \in R'$ there exists $r \in R$ such that $r'-f(r) \in I'$. Thus, we see that $f_*$ is injective iff $I = f^{-1}(I')$; and $f_*$ is surjective iff $R' = \operatorname{im} f + I'$.
$\delta(x,A)\leq \delta(x,B)+\sup\limits_{b\in B} \delta(b,A)$
I'm afraid this is not correct. There's a collision of notation - if the supremum is over all $b \in B$, you cannot use the letter $b$ outside of the supremum. Maybe you meant the following: let $b_0 \in B$ is such that $\delta(b_0,A) = \sup_{b\in B} \delta(b,A)$. Then we obtain a contradiction by taking $x := b_0$. This is also wrong in two ways: such $b_0$ might not exist and more importantly, our proof works only for this particular $x = b_0$, not for arbitrary $x \in X$. Hint. Let $\varepsilon := \sup_{b\in B} \delta(b,A)$. Note that $B \subseteq A^\varepsilon$, then use (iii) to show $\delta(x,A^\varepsilon) \leq \delta(x,B)$ and finally apply (iv).
Prove whether $f(n)$ is $O$, $o$, $\Omega$, $\omega$ or $\Theta$ of $g(n)$ for given $f$ and $g$
Here's a simpler way to do it: $\log n<n$ for sufficiently large $n$, so $$\frac{\log n+n}{n^2}<\frac{2n}{n^2}=\frac2n$$ and $\lim_{n\to\infty}\frac2n=0$. Thus $f(n)=o(g(n))$.
Ratio of Expected values of Boys to Girls
Let the expected number of girls in any given family is $g$. If a boy is born first (probability =$\frac{1}{2}$) then $g=0$, but if a girl is born (probability =$\frac{1}{2}$) then $g=1 + g$ ($1$ for the girl already born to the couple and $g$ for the fact that state is reset to original state where couple have to keep breeding until a boy is born) $\implies$ $g = 0 (\frac{1}{2}) + (g + 1) \frac{1}{2}$ $\implies$ $g = 1$ Similarly, lets say the expected number of boys in any given family is $b$. If a boy is born (probability =$\frac{1}{2}$) then $b=1$, but if a girl is born (probability =$\frac{1}{2}$) then $b=0 + b$ ($0$ because now the newborn is girl and $b$ for the fact that couple are back to where they started but with an extra girl and couple keep breeding) $\implies$ $b = 1 (\frac{1}{2}) + (b + 0) \frac{1}{2}$ $\implies$ $b = 1$ Lets say town has $n$ couples. Therefore, expected number of girls in town is $E[g_1] + E[g_2] + ... +E[g_n]$ But, $E[g_1] = E[g_2] = E[g_n] = g = 1$ $\implies$ Expected number of girls in town $= n$ Similarly, Expected number of boys in town $= n$ This implies ratio of expected values is $n:n = 1:1$
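A quick simulation backs this up (a Python sketch, not part of the original answer):

    import random

    random.seed(1)
    boys = girls = 0
    for _ in range(100_000):                # 100,000 couples
        while True:
            if random.random() < 0.5:       # a boy is born, the couple stops
                boys += 1
                break
            girls += 1                      # a girl is born, the couple tries again
    print(boys, girls, girls / boys)        # both counts close to 100,000; ratio ~ 1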
Inequality proof for all $x,y,z \gt 0$
Use AM-GM: $$\frac{xy}{z}+\frac{xz}{y}\geq 2x$$ $$\frac{yz}{x}+\frac{xz}{y}\geq 2z$$ $$\frac{xy}{z}+\frac{yz}{x}\geq 2y$$ Adding the three inequalities and dividing by $2$ gives $\frac{xy}{z}+\frac{yz}{x}+\frac{xz}{y}\geq x+y+z$. Q.E.D.
Derivative of an autonomous system of ODEs
If you make the flow definition more specific, such as not re-using the simple $x$ in two different meanings, you could define the flow as $$x(t)=ϕ(t;x_0) ~~ \text{ where } ~~ x(0)=ϕ(0;x_0)=x_0.$$ You need to invoke the autonomous nature of the ODE which has as consequence that $t\mapsto ϕ_t$ is a group action/representation of the additive group $(\Bbb R,+)$. This means that $ϕ_t\circ ϕ_s=ϕ_{t+s}$, or $$ ϕ(t+s;x_0)=ϕ(t;ϕ(s;x_0))\iff x(t+s)=ϕ(t;x(s)). $$ From your point-of-view the closest variant might be $x(t)= ϕ(t-s;x(s))$. Anyway, to get the $x$-derivative of $ϕ$ involved you need first some kind of explicit variability in the $x$-argument of $ϕ$ which is easiest done by moving the initial point along the solution curve. Taking, in the first variant, the derivative for $s$ at $s=0$ then gives indeed by the chain rule $$ f(ϕ(t+s;x_0))=\frac{∂}{∂s}ϕ(t;ϕ(s;x_0))=\frac{∂ϕ}{∂x}(t;ϕ(s;x_0))\cdot f(ϕ(s;x_0))\\ \overset{s=0}\implies f(ϕ(t;x_0))=\frac{∂ϕ}{∂x}(t;x_0)f(x_0)\\ $$
Show $p(A)=B$ or $p(B)=A$
I would divide this into cases. Case 1: One matrix is a multiple of the identity. We can simply use a constant polynomial, then. Case 2: One matrix (WLOG call it $A$) is diagonalizable with two distinct eigenvalues $\lambda_1,\lambda_2$. We can show that since $B$ commutes with $A$, it is simultaneously diagonalizable, with eigenvalues $\mu_1,\mu_2$. It suffices to find a $p$ such that $p(\lambda_i) = \mu_i$ for $i=1,2$. Case 3: $A$ is not diagonalizable. Without loss of generality (after an appropriate change of basis), suppose that $A$ is in Jordan form, so that $$ A = \pmatrix{\lambda & 1\\0 & \lambda} $$ Note that $B$ commutes with $A - \lambda I$. Conclude that $B$ has the form $$ B = \pmatrix{\mu & t\\0 & \mu} $$ Write $B = \mu I + t(A - \lambda I)$. For the $3 \times 3$ case, consider the diagonal matrices $$ A = \pmatrix{1\\&1\\&&0}, \quad B = \pmatrix{0\\&1\\&&1} $$
Closed form of $I = \int_{0}^{+\infty}{t^\kappa e^{-\ \frac{t}{\lambda}}\sin^2{\left(\frac{\pi t}{2\kappa\lambda}\right)}dt}$
I can answer myself following the helpful answers from this post : How to simplify $\left(x+i\pi\right)^{1+x}+\left(x-i\pi\right)^{1+x}$ for $x>0$. $$I=\frac{\Gamma\left(\kappa+1\right)\ \lambda^{\kappa+1}}{4}\left(2-\left(\frac{\kappa}{\kappa-i\pi}\right)^{1+\kappa}-\left(\frac{\kappa}{\kappa+i\pi}\right)^{1+\kappa}\right)$$ $$I=\frac{\Gamma\left(\kappa+1\right)\ \lambda^{\kappa+1}}{4}\left(2-\kappa^{1+\kappa}\left(\frac{1}{\left(\kappa-i\pi\right)^{1+\kappa}}+\frac{1}{\left(\kappa+i\pi\right)^{1+\kappa}}\right)\right)$$ $$I=\frac{\Gamma\left(\kappa+1\right)\ \lambda^{\kappa+1}}{4}\left(2-\left(\frac{\kappa}{\kappa^2+\pi^2}\right)^{\kappa+1}\left(\left(\kappa+i\pi\right)^{1+\kappa}+\left(\kappa-i\pi\right)^{1+\kappa}\right)\right)$$ $$I=\frac{\Gamma\left(\kappa+1\right)\ \lambda^{\kappa+1}}{2}\left(1-\left(\frac{\kappa}{\sqrt{\kappa^2+\pi^2}} \right)^{\kappa+1}\cos{\left(\left(1+\kappa\right)\arctan{\frac{\pi}{\kappa}}\right)}\right)$$
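As a numerical sanity check (my addition, not part of the derivation), one can compare a quadrature of the original integral with the closed form for a few sample values of $\kappa$ and $\lambda$; the parameter values below are arbitrary. A sketch using scipy:

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def I_numeric(k, lam):
    # direct quadrature of the integral defining I
    f = lambda t: t**k * np.exp(-t / lam) * np.sin(np.pi * t / (2 * k * lam))**2
    val, _ = quad(f, 0, np.inf, limit=200)
    return val

def I_closed(k, lam):
    # the closed form derived above
    r = k / np.sqrt(k**2 + np.pi**2)
    return gamma(k + 1) * lam**(k + 1) / 2 * (1 - r**(k + 1) * np.cos((k + 1) * np.arctan(np.pi / k)))

for k, lam in [(1.0, 1.0), (2.5, 0.7), (4.0, 1.3)]:
    print(k, lam, I_numeric(k, lam), I_closed(k, lam))  # the two columns should agree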
Sum of the series $\sum\limits_{n=1}^\infty \frac{x^{2n}}{n\cdot2^n}$
You know that $$\sum_{n\geqslant1}\frac1nt^n=-\log(1-t) $$ and you are asking about $$\sum_{n\geqslant1}\frac{x^{2n}}{n2^n}=\sum_{n\geqslant1}\frac1n\left(\frac{x^2}2\right)^n=-\log\left(1-\ldots\right). $$
Trick to solving this Summation?
Let's look at $$\sum_{i=0}^k r^i=\frac{1-r^{k+1}}{1-r}$$ which is the finite geometric series. If we now take the derivative of both sides with respect to $r$, we find $$\frac{d}{dr}\sum_{i=0}^k r^i=\sum_{i=0}^k i\,r^{i-1}=\frac{d}{dr}\frac{1-r^{k+1}}{1-r}=-(k+1)\frac{r^k}{1-r}+\frac{1-r^{k+1}}{(1-r)^2}$$ Now multiply both sides by $r$ and let $r=2$: $$\sum_{i=0}^k i\cdot2^{i}=-2(k+1)2^k/(-1)+2(1-2^{k+1})/(-1)^2=(k+1)2^{k+1}+2-2\cdot2^{k+1}$$ $$=(k-1)2^{k+1}+2$$ which is the true formula (I think you may have missed a bracket in yours).
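A two-line Python check of the resulting formula (my addition) for small $k$:

for k in range(8):
    assert sum(i * 2**i for i in range(k + 1)) == (k - 1) * 2**(k + 1) + 2
print("formula verified for k = 0..7")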
Stone's Representation Theorem and The Compactness Theorem
Let $\textbf{Bool}$ be the category of boolean algebras, and let $\textbf{Stone}$ be the category of Stone spaces (however you define it). Suppose $F : \textbf{Bool}^\textrm{op} \to \textbf{Stone}$ is a weak equivalence of categories (i.e. fully faithful and essentially surjective on objects); we will deduce the boolean prime ideal theorem. First, note that $\textbf{Bool}$ has an initial object, namely the boolean algebra $2 = \{ 0, 1 \} = \mathscr{P}(1)$, and it has a terminal object, namely the boolean algebra $1 = \{ 0 \} = \mathscr{P}(\emptyset)$. Since $F$ is a weak equivalence, $F(2)$ must be a terminal object (so a one-point space) and $F(1)$ must be an initial object (the empty space). [It is very easy to check these preservation properties even in the absence of a quasi-inverse for $F$.] Let $B$ be a boolean algebra. Clearly, prime ideals of $B$ correspond to boolean algebra homomorphisms $B \to 2$, and hence, to continuous maps $F(2) \to F(B)$, which are the same thing as points of $F (B)$. But if $F (B)$ is empty, then the canonical map $F(1) = \emptyset \to F(B)$ is a homeomorphism, and so the canonical homomorphism $B \to 1$ is an isomorphism. [Again, this is easy to check even in the absence of a quasi-inverse for $F$.] Thus, $B$ has a prime ideal if and only if it is non-trivial.
To find the inverse of $f$ (using calculus)
If $y \ge 0$, $f^{-1}(y) = +\sqrt{y}$, not $-\sqrt{y}$, because you want $x \ge 0$. Similarly, if $y < 0$, $f^{-1}(y) = -\sqrt{-y}$.
What is a decidable fragment?
It is a "subset" of first-order logic that is decidable. Another example is Monadic predicate calculus : in which all relation symbols are monadic (that is, they take only one argument), and there are no function symbols. This means that all atomic formulas are thus of the form $P(x)$. In the case of a logical system, decidability means that : there is an effective method for determining whether arbitrary formulas are theorems of the logical system. For example, propositional logic is decidable, because the truth-table method can be used to determine whether an arbitrary propositional formula is logically valid.
Incorrect computation of Euler Sum using Complex Residues
When $f$ has a simple pole - we assume that the residue is $1$ for simplicity of notation, the general case follows by multiplication with a constant - at $p$, write $$f(s) = \frac{1}{s-p} + g(s)$$ with $g$ holomorphic on a neighbourhood of $p$. Then by the binomial theorem $$f(s)^m = \sum_{k = 0}^{m} \binom{m}{k} \frac{g(s)^k}{(s-p)^{m-k}}\,,\tag{$\ast$}$$ and to compute the residue of $f(s)^m r(s)$ for a holomorphic $r$ with a zero of order $0 \leqslant\mu < m$ at $p$, we need the terms for $k \leqslant m - \mu - 1$ on the right hand side of $(\ast)$, since all these may contain a term $\frac{c}{s-p}$ with $c\neq 0$ in their Laurent expansion about $p$. Writing $r(s) = (s-p)^{\mu} u(s)$, we obtain $$f(s)^m\cdot r(s) = \sum_{k = 0}^{m - \mu - 1} \binom{m}{k} \frac{g(s)^k u(s)}{(s-p)^{m-\mu-k}} + \sum_{k = m-\mu}^{m} \binom{m}{k}(s-p)^{k+\mu-m}g(s)^ku(s)$$ where the second sum is holomorphic at $p$, and hence \begin{align} \operatorname{Res} \bigl(f^m(s) r(s)\bigr)_{s = p} &= \sum_{k = 0}^{m - \mu - 1} \binom{m}{k} \operatorname{Res}\biggl(\frac{g(s)^ku(s)}{(s-p)^{m-\mu-k}}\biggr)_{s = p} \\ &= \sum_{k = 0}^{m - \mu - 1} \binom{m}{k} \frac{1}{(m-\mu-k-1)!} \biggl(\frac{d}{ds}\biggr)^{m-\mu-k-1}\bigl(g(s)^ku(s)\bigr)\biggr\rvert_{s = p}\,. \end{align} For $m = 3$ and $\mu = 0$ that is $$\operatorname{Res}\bigl(f(s)^3r(s)\bigr)_{s = p} = \frac{1}{2}r''(p) + 3\bigl(g'(p)r(p) + g(p)r'(p)\bigr) + 3g(p)^2r(p)\,.$$
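To gain confidence in the $m=3$, $\mu=0$ formula one can compare it with a direct residue computation for a concrete choice of $g$ and $r$; the particular $g$, $r$ and $p$ in this sympy sketch (my addition) are arbitrary:

import sympy as sp

s = sp.symbols('s')
p = 0
g = s + 2*s**2          # arbitrary holomorphic part of f
r = 1 + s + 3*s**2      # arbitrary r with r(p) != 0, i.e. mu = 0
f = 1/(s - p) + g

direct = sp.residue(sp.expand(f**3 * r), s, p)
formula = (sp.Rational(1, 2) * r.diff(s, 2)
           + 3 * (g.diff(s) * r + g * r.diff(s))
           + 3 * g**2 * r).subs(s, p)
print(direct, sp.simplify(formula))  # the two values should agree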
Finding a critical point using Lagrange multipliers
From $\nabla f=\lambda\nabla g$ you obtain a system of $n$ equations with high symmetry. The first equation is $$x_2x_3\ldots x_n=\lambda\ .$$ Multiply this equation by $x_1$ to obtain $$p:=\prod_ix_i=\lambda x_1\ .$$ In the same way you obtain $n$ equations $$p=\lambda x_j\qquad(1\leq j\leq n)\ .$$ This allows us to conclude that a conditionally critical point with all $x_i\ne0$ has $x_1=x_2=\ldots=x_n$, and therefore $x_i=1$ $\>(1\leq i\leq n)$.
Set Theory: Proving Statements About Sets
Let $A=\{2,3\}$, $B=\{1,3\}$, and $C=\{1,2\}$. The intersection of the three sets is empty. But none of them is a subset of the complement of another. By symmetry, it is sufficient for example to show that $A$ is not a subset of the complement $B^c$ of $B$. Note that $B^c$ consists of all integers except $1$ and $3$. Since $A$ does contain $3$, $A$ is not a subset of $B^c$. Comment: As has been pointed out in a comment by @Joe, the empty set is a subset of every set, so setting $A=\emptyset$ cannot give you a counterexample, whatever be the choice of $B$ and $C$.
Calculating the limit of the sum.
Hint : $$\rm L = \int_{0}^1 \frac{1}{1+x} \rm dx$$
Computing the integral of a solution of the wave equation over the whole space
We have $$ \frac{d^2}{dt^2} \int_V u(x,t) \, dx = \int_V u_{tt} \, dx = \int_V \Delta u \, dx = \int_{\partial V} \nabla u \cdot dS, $$ using differentiation under the integral sign and the Divergence Theorem. If we take $V=\mathbb{R}^n$, the last term vanishes because at any finite time the solution to the wave equation with compactly supported initial data is itself compactly supported (information propagates along characteristics, which are finite-speed curves for the wave equation). Hence we must have $$ \int_{\mathbb{R}^n} u(x,t) \, dx = At+B. $$ Setting $t=0$ gives $B = \int_{\mathbb{R}^n} u(x,0) \, dx = \int_{\mathbb{R}^n} f(x) \, dx $, and $A$ is obtained similarly.
Product of $L^p$-convergent sequences is Cauchy
Since $L_p$ is a complete space, there is $f\in L_p$ such that $\|f_n-f\|_p\xrightarrow{n\rightarrow\infty}0$. $$ |f_nh_n -fh|\leq |(f_n-f)h_n| + |fh_n-fh|\leq |f_n-f|M+|f||h_n-h| $$ As $|f||h_n-h|\leq 2M|f|$ and $|f||h_n-h|\xrightarrow{n\rightarrow\infty}0$ a.s., $\|f(h-h_n)\|_p\rightarrow0$ by dominated convergence. Consequently $f_nh_n$ converges to $fh$ in $L_p$, and so $\{f_nh_n\}$ is a Cauchy sequence.
Topological complement
(i) If $S\circ T=1_E$, then $A:=\ker S \le F$ is closed: since $S$ is continuous and $\{0\}$ is closed, $A=S^{-1}(\{0\})$ is closed. The closedness of $R(T)$ can be proved like this: let $f_n=T(e_n)$ be a sequence in $R(T)$ with limit point $f\in F$. We want to show $f\in R(T)$. We have $S(f_n)=S(T(e_n))=e_n$, and $S$ is continuous, so $e_n\to S(f)$ in $E$. But then $T(e_n)\to f$ (by definition of $f$), and by continuity of $T$ we also have $T(e_n)\to T(S(f))$. By the uniqueness of limits, $f=T(S(f))\in R(T)$. For the rest, try to conclude that $\ker S\cap R(T)=\{0\}$, and that every vector $f\in F$ can be written as $f=T(S(f))+(f-T(S(f))) \in R(T)+\ker S$. (ii) If $F=A\oplus R(T)$ for a closed subspace $A$, then consider $$S(a+x):=T^{-1}(x) \ \text{ for } a\in A,\,x\in R(T).$$
Proving global exponential stability of a perturbed system
Since $P$ and $Q$ are both positive definite, for $V(x) = x^TPx$ we have $$\lambda_{min}(Q)\Vert x\Vert^2\le x^TQx\le \lambda_{max}(Q)\Vert x\Vert^2\tag{1}$$ $$\lambda_{min}(P)\Vert x\Vert^2\le V(x)\le \lambda_{max}(P)\Vert x\Vert^2\tag{2}$$ From $(1)$: $$-x^TQx\le-\lambda_{min}(Q)\Vert x\Vert^2$$ and from $(2)$: $$\frac1{\lambda_{max}(P)}V(x)\le \Vert x\Vert^2$$ This yields $$\dot{V}(x) \le -\varepsilon x^TQx\le-\varepsilon \frac{\lambda_{min}(Q)}{\lambda_{max}(P)}V(x)$$ Hence the origin of this system is globally exponentially stable.
The proof of finding extreme points of the unit ball of $l^1$
For example, take $$ e^{(1)} = (1,0,0, \cdots ) $$ Clearly, $e^{(1)}\in B$. Now suppose that for $t \in (0,1)$, you have $b=\{ b_j \}_j , d=\{ d_j \}_j$ with $b,d \in B$ such that $$ e^{(1)} = t b+ ( 1- t )d $$ Then, you must have $1=tb_1+(1-t)d_1$ and $0=tb_j+(1-t)d_j$ for $j>1$; together with $\|b\|_1\le1$ and $\|d\|_1\le1$ this gives $$ b_j=d_j=\left\{ \begin{array}{cc} 1 & \text{if } j=1 \\ 0 & \text{if } j>1 \end{array} \right. $$ Thus $b=d=e^{(1)}$, proving that $e^{(1)}$ is an extreme point of $B$. Can you take it from here? You basically only need to answer whether there are any more extreme points than the $$ e^{(n)}=(\underbrace{0, \cdots, 0}_{n-1},1, 0, \cdots) $$
Finding all composites in given limits
Things are much easier if you are counting valid pairs of $a$ and $n$ rather than counting the actual values (i.e., I'm assuming that you don't want to count duplicates even if the $a$ and $n$ are different). The strategy is still basically the same. If you don't want to include duplicates, then it suffices to use only prime $n$ and then apply the inclusion-exclusion principle to take care of duplicates. For a given $n$, say $n = 2$, computing the $n$th roots of your upper and lower bounds tells you how many squares you have in your range, without having to list them all. Then do $n = 3$, by taking cube roots of your upper and lower bounds. If your range is not too large you won't have to do that many primes $n$. Then you need to subtract counts for all $n$ which are products of two of your primes, then add counts for all $n$ that are products of three of your primes, and so forth (this is the inclusion-exclusion). If your range consists of numbers less than $2^k$, then the largest prime $n$ you'll need to consider is at most $k$. Also, you will probably only have to apply a few levels of inclusion-exclusion before your $n$ is too large for $a^n$ to fit in your range.
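Here is a rough Python sketch of the idea (my own illustration, with hypothetical helper names and a hard-coded prime list); it counts numbers of the form $a^n$ with $a\ge2$, $n\ge2$ in a range $[lo,hi]$, which is my reading of the problem, and cross-checks against brute force on a small range:

from itertools import combinations

def nth_root(x, n):
    # largest integer r with r**n <= x, for x >= 0
    r = int(round(x ** (1.0 / n)))
    while r ** n > x:
        r -= 1
    while (r + 1) ** n <= x:
        r += 1
    return r

def count_powers(lo, hi, n):
    # how many values a**n with a >= 2 lie in [lo, hi]
    return max(0, nth_root(hi, n) - max(nth_root(lo - 1, n), 1))

def count_perfect_powers(lo, hi):
    # numbers in [lo, hi] equal to a**n for some a >= 2, n >= 2
    primes = [p for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59)
              if 2 ** p <= hi]            # extend this list for larger hi
    total = 0
    for size in range(1, len(primes) + 1):
        for subset in combinations(primes, size):
            n = 1
            for p in subset:
                n *= p
            if 2 ** n > hi:
                continue
            total += (-1) ** (size + 1) * count_powers(lo, hi, n)
    return total

brute = len({a ** n for a in range(2, 1001) for n in range(2, 21) if 10 <= a ** n <= 10**6})
print(count_perfect_powers(10, 10**6), brute)  # the two counts should match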
Is platonism assumed when writing down formal theories?
Platonism doesn't have to be assumed when writing down formal theories. I'm going to give formalist answers to this; however, other philosophies of mathematics have their own answers. In formalism, mathematics is considered to be a system of symbols that are manipulated according to rules. A key aspect of such symbol systems is that you can define a new symbol system that is isomorphic to the previous one. So the system of arithmetic using Hindu-Arabic numerals is equivalent to the system that uses the Peano axioms; however, the Hindu-Arabic numerals are more compact. Some of these isomorphisms are so compact that a finite string-manipulation rule in one system is equivalent to an infinite number of string manipulations in another. So while we say informally that there are infinitely many axioms created by the first-order axiom schema of induction, formally what is said is that there is an axiom schema which has certain text-manipulation behaviours. Formally we never deal with infinite strings; rather, we deal with finite strings that are isomorphic to infinite string systems.
Differentiability in normed spaces
(2.) If $g$ is injective then $g'$ is injective. Notice $g(x_1)=g(x_2)$ implies $(x_1,f(x_1)) = (x_2,f(x_2))$ hence $x_1=x_2$ which shows $g$ is injective. (my notation below may be non-standard, but, I believe when you clean-up the solution it's something like what follows:) (3.) Suppose $s(x,z)=z-f(x)$. Observe $g(x) = (x,f(x))$ implies $g'(x) = Id \times f'(x)$. Then $s'(x,z) = -f'(x) \times Id$. But, $s'(g'(x)) = f'(x)-f'(x)=0$ thus $g'(x)$ is in the kernel of $s'$.
Can I apply formal definition to $\lim\limits_{x\to1} \left[2x+1\right]$?
You can already guess from your function that the limit will be $3$, since the function is continuous at $1$ and $f(1) = 3$. So if you want to be formal you can do the following. Given $\epsilon > 0$, choose $\delta = \frac{\epsilon}{2}$. Then, for any $x$ such that $|x - 1| < \delta$, we have $|f(x) - 3| = |2x + 1 - 3| = 2|x - 1| < 2\delta = \epsilon$. And you're done.
Proving the concurrency of angle bisectors in a triangle analytically
Without loss of generality, we can set $A=(0,0)$, $B=(1,0)$ and $C=(a,b)$, with $b>0$. By the half-angle formula for the tangent one then gets the slopes $m$ and $m'$ of the angle bisectors through $A$ and $B$: $$ m=\sqrt{{1-a/\sqrt{a^2+b^2}\over1+a/\sqrt{a^2+b^2}}}, \quad m'=-\sqrt{{1-(1-a)/\sqrt{(1-a)^2+b^2}\over1+(1-a)/\sqrt{(1-a)^2+b^2}}}. $$ You can now find the equations of the two lines and their intersection $G$. It remains to show that $CG$ is the bisector of the angle at $C$. I think the simplest way to do that is to find the intersection $H$ between line $CG$ and the $x$ axis, and check that $AH/BH=AC/BC$.
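A quick numerical cross-check of the slope formulas (my addition; the sample triangle is arbitrary): intersect the two bisectors and compare the result with the incenter computed from the standard weighted-vertex formula $I=(a\,A+b\,B+c\,C)/(a+b+c)$, where $a,b,c$ are the side lengths opposite $A,B,C$.

import numpy as np

a_, b_ = 0.3, 0.8                       # sample apex C = (a_, b_); A = (0,0), B = (1,0)
A, B, C = np.array([0., 0.]), np.array([1., 0.]), np.array([a_, b_])

ca = a_ / np.hypot(a_, b_)              # cos of the angle at A
cb = (1 - a_) / np.hypot(1 - a_, b_)    # cos of the angle at B
m  =  np.sqrt((1 - ca) / (1 + ca))      # slope of the bisector through A
mp = -np.sqrt((1 - cb) / (1 + cb))      # slope of the bisector through B

# intersection G of  y = m*x  and  y = mp*(x - 1)
x = -mp / (m - mp)
G = np.array([x, m * x])

# incenter via the weighted-vertex formula (weights = lengths of opposite sides)
la, lb, lc = np.linalg.norm(B - C), np.linalg.norm(A - C), np.linalg.norm(A - B)
I = (la * A + lb * B + lc * C) / (la + lb + lc)
print(G, I)                             # the two points should coincide up to rounding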
Find a and b such that ? ..
$$\begin{cases} a+2b=-5 \\ -a+2b=-11 \end{cases}\;\Rightarrow\; a=3,\quad b=-4$$ $$\begin{cases} a+2b=-5 \\ 2a-2b=14 \end{cases}\;\Rightarrow\; a=3,\quad b=-4$$
without computing it, show that $I = \mathbb{E}[e^{XY} |X] \geq 1$
Since $Y$ is standard normal, $E(e^{xY})=e^{x^2/2}$ for every $x$, hence, by independence of $X$ and $Y$, $E(e^{XY}\mid X)=e^{X^2/2}$ almost surely. Thus, $E(e^{XY}\mid X)\geqslant1$ almost surely. The proof does not use the distribution of $X$, only the distribution of $Y$ and the independence of $X$ and $Y$. Your approach does not yield a contradiction: you are asked to show that some random variable $Z$ is such that $Z\geqslant1$ almost surely. The negation of this property is not that $Z<1$ almost surely, but that $P(Z<1)\ne0$. And the hypothesis that $P(Z<1)\ne0$ does not imply that $E(Z)<1$. Edit: If one is forbidden to compute $E(e^{XY}\mid X)$, one can note that, by Jensen convexity inequality, for every $x$, $E(e^{xY})\geqslant e^{xE(Y)}=1$, hence, again by independence, $E(e^{XY}\mid X)\geqslant1$ almost surely. This uses only that $E(Y)=0$ and the independence of $X$ and $Y$.
Analytic function on $D$ such that $| f ( z ) | = | \sin z | , \forall z \in$ $D ,$ then prove that $f(z)=c\sin z$ for some constant c
Hint: If $\sin(a)=0$ for some $a \in D$, deduce that $f(a)=0$, and hence that there exists an analytic function $h(z)$ on $D$ such that $f(z)=(z-a)h(z)$. Use this to show that $\frac{f(z)}{\sin(z)}$ has a removable singularity at $z=a$.
Proof of implication in Product Space
I thought of this in terms of its contrapositive. Suppose $\bar{A}\times\bar{B}$ is not a subset of $\overline{A\times B}$. There exists some $(x_0,y_0)\in \bar{A}\times\bar{B} - \overline{A\times B}$. Now show that the contrapositive of the hypothesis is false. The contrapositive of the hypothesis is, "if $x\in\overline{A}$ and $y\in\overline{B}$, then $(x,y)\in\overline{A\times B}$." However, because $(x_0,y_0)\notin \overline{A\times B}$, having $x\in \overline{A}$ and $y\in\overline{B}$ does not imply that $(x,y)\in\overline{A\times B}$. To be a little more precise, make the following assignments: $$P(x,y) = (x,y)\notin\overline{A\times B}$$ $$Q(x,y) = x\notin A\mbox{ or }y\notin B$$ $$R = \overline{A}\times\overline{B}\subseteq\overline{A\times B}$$ The proposition you want to prove is: $$\forall(x,y)\big( P(x,y)\Rightarrow Q(x,y)\big) \Rightarrow R.$$ The first implication, $P\Rightarrow Q$, is equivalent to its contrapositive $$\forall(x,y)\big( \neg Q(x,y)\Rightarrow \neg P(x,y)\big).$$ I suggest proving the contrapositive of your proposition: $$\neg R\Rightarrow \neg \forall(x,y)\big( \neg Q(x,y)\Rightarrow\neg P(x,y)\big)$$ i.e. $$\neg R\Rightarrow \exists(x,y)\neg \big( \neg Q(x,y)\Rightarrow\neg P(x,y)\big).$$
Autocorrelation function of MA(1)
Our $\varepsilon$'s should be iid and follow $$ \varepsilon_t\sim\mathcal{N}(0,\delta^2) $$ (I am assuming this, given the form of the model). $$ \mathbb{E}[(\varepsilon_t+\beta\varepsilon_{t-1})^2] = \mathbb{E}[\varepsilon_t^2+2\beta\varepsilon_t\varepsilon_{t-1} + \beta^2\varepsilon_{t-1}^2] = \mathbb{E}[\varepsilon_t^2] + 2\beta\mathbb{E}[\varepsilon_t\varepsilon_{t-1}] + \beta^2\mathbb{E}[\varepsilon_{t-1}^2] $$ In theory you should have $$ \mathbb{V}[\varepsilon_t] = \mathbb{E}[\varepsilon_t^2] - \mathbb{E}[\varepsilon_t]^2 = \mathbb{E}[\varepsilon_t^2] = \delta^2 $$ and since they should be independent $$ \mathbb{E}[\varepsilon_t\varepsilon_{t-1}] = \mathbb{E}[\varepsilon_t]\mathbb{E}[\varepsilon_{t-1}] = 0 $$ so $$ \mathbb{E}[(\varepsilon_t+\beta\varepsilon_{t-1})^2] = \delta^2 + 0 + \beta^2\delta^2 = (1+\beta^2)\delta^2 $$
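A small simulation sketch (my addition) can back this up; it assumes the MA(1) form $X_t=\varepsilon_t+\beta\varepsilon_{t-1}$ with arbitrarily chosen $\beta$ and $\delta$:

import numpy as np

rng = np.random.default_rng(0)
beta, delta, n = 0.6, 1.0, 200_000
eps = rng.normal(0.0, delta, n + 1)
x = eps[1:] + beta * eps[:-1]                    # X_t = eps_t + beta * eps_{t-1}

print(np.mean(x**2), (1 + beta**2) * delta**2)   # sample vs theoretical variance
print(np.mean(x[1:] * x[:-1]), beta * delta**2)  # sample vs theoretical lag-1 autocovariance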
Direct product versus direct sum
I think the author means that we define an addition on $M\oplus N$, but not on $M\times N$; moreover, we consider $M\times N$ as a generating set for formal sums $(a,b)+(c,d)$. As to the second question, I think this expression is not nice and explains nothing.
What are good examples of polar sets in $\mathbb R^2$?
A few examples include: The polar set of a singleton $\{x\}$ is a halfspace. If $\|x\|=1$ the boundary of the halfspace touches $x$; it lies below $x$ (between the origin and $x$) if and only if $\|x\|>1$. The polar set of the unit disk is the disk itself. See this question for the polar set of $\{(x,y)\in\mathbb R^2: x^2+y^4\le1\}$. Consider the polytope with the four vertices $(\pm 1,\pm1)$ (a square centred at the origin). Then its polar set is a rotated square with vertices $(0,\pm1),(\pm1,0)$. You can find more examples in these lecture notes by Jonathan Kelner (Link to pdf). If you have access to Mathematica, I made a simple snippet to explore what the polar sets of finite sets of points look like. Just click anywhere to add points and see the corresponding polar set. Here's the code to generate this visualisation: DynamicModule[{pts={{1,1}}}, EventHandler[ Dynamic@Show[ Graphics[{ PointSize@0.04,Dynamic@Point@pts, Circle[] }, Axes->True,AxesOrigin->{0,0},Frame->False,PlotRange->ConstantArray[{-2,2},2], GridLines->Automatic,AxesStyle->Directive[Large,Black] ], RegionPlot[ And@@(Dot[#,{x,y}]<=1&/@pts), {x,-4,4},{y,-4,4},PlotPoints->50 ] ], {"MouseClicked":>(AppendTo[pts,MousePosition["Graphics"]])} ] ] See source of this answer for the MMA code used to generate the figures.
For $ A,B \in M_n(\mathbb C) $, if $A^*AB=0$ then $AB=0$
From $A^*AB=0$, multiplying on the left by $B^*$ you get $$ 0=B^*A^*AB=(AB)^*AB. $$ So now the proof is reduced to showing that if $X^*X=0$, then $X=0$. With a similar idea as above, for any $v\in \mathbb C^n$ we have $$ 0=v^*X^*Xv=(Xv)^*Xv. $$ This implies $Xv=0$, and as this can be done in particular for $v$ in the canonical basis, we get $X=0$.
I need to prove a Boolean function identity
If $+$ means or, $!$ means not, and juxtaposition means and (like $ab$ for $a$ and $b$), then De Morgan's rules become $!(a+b)=(!a)(!b)$ and $!(ab)=(!a)+(!b)$. Then $!(!xy +x!y)=(!(!xy))(!(x!y))$. Applying De Morgan inside both factors (and using $!!a=a$) then makes it $(x+!y)(!x+y)$. Distributing then gives four terms, two of which ($x!x$ and $!yy$) are zero, and only $xy+!x!y$ remains.
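Since there are only four truth assignments, an exhaustive check is easy; a small Python sketch (my addition) confirming $!(!xy + x!y) = xy + !x!y$:

for x in (False, True):
    for y in (False, True):
        lhs = not ((not x and y) or (x and not y))
        rhs = (x and y) or (not x and not y)
        assert lhs == rhs
print("identity holds for all four truth assignments")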
Minimum of sum using AM-GM
Let $x=y+a$. Hence, by AM-GM $$x+\frac{8}{y(x-y)}=a+y+\frac{8}{ay}\geq a+\frac{4\sqrt2}{\sqrt{a}}=$$ $$=a+\frac{2\sqrt2}{\sqrt{a}}+\frac{2\sqrt2}{\sqrt{a}}\geq3\sqrt[3]{a\left(\frac{2\sqrt2}{\sqrt{a}}\right)^2}=6.$$ The equality occurs for $a=2$, $x=4$ and $y=2$, which says that $6$ is a minimal value.
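A crude numerical check (my addition): a grid search over $y>0$ and $a=x-y>0$ should find a minimum close to $6$ near $x=4$, $y=2$.

import numpy as np

yy, aa = np.meshgrid(np.linspace(0.1, 10, 400), np.linspace(0.1, 10, 400))
xx = yy + aa                                 # enforce x > y by writing x = y + a
vals = xx + 8.0 / (yy * (xx - yy))
i = np.unravel_index(np.argmin(vals), vals.shape)
print(vals[i], xx[i], yy[i])                 # roughly 6, attained near x = 4, y = 2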
Closed form of $\int_{0}^{\pi}\left(\frac{\sin n x}{\sin x}\right)^m dx$, where $n,m$ are integers?
For $m,n \in \mathbb{N}$ let $$ I(m,n) = \int \limits_0^\pi \left(\frac{\sin(n x)}{\sin(x)}\right)^m \, \mathrm{d} x \, . $$ If $m$ is odd and $n$ is even, the substitution $x = \pi - y$ yields $$ I(m,n) = \int \limits_0^\pi \left(\frac{(-1)^{n+1} \sin(n y)}{(-1)^2 \sin(y)}\right)^m \, \mathrm{d} y = (-1)^m I(m,n) = - I(m,n) \, ,$$ so $I(m,n) = 0$ . In any other case $(n-1)m$ is even and the integral does not vanish. Then we can use the partial sums of the geometric series to find \begin{align} I(m,n) &= \int \limits_0^\pi \left(\frac{\mathrm{e}^{\mathrm{i} n x} - \mathrm{e}^{-\mathrm{i} n x}}{\mathrm{e}^{\mathrm{i} x} - \mathrm{e}^{-\mathrm{i} x}}\right)^m \, \mathrm{d} x = \int \limits_0^\pi \mathrm{e}^{\mathrm{i} (n-1) m x}\left(\frac{1 - \mathrm{e}^{-2 \mathrm{i} n x}}{1 - \mathrm{e}^{-2\mathrm{i} x}}\right)^m \, \mathrm{d} x \\ &= \int \limits_0^\pi \mathrm{e}^{\mathrm{i} (n-1) m x}\left(\sum \limits_{k=0}^{n-1} \mathrm{e}^{-2\mathrm{i} k x}\right)^m \, \mathrm{d} x \\ &= \sum \limits_{k_1=0}^{n-1} \cdots \sum \limits_{k_m=0}^{n-1} \, \int \limits_0^\pi \exp\left[2 \mathrm{i}\left(\frac{(n-1)m}{2} - \sum \limits_{i=1}^m k_i\right) x\right] \, \mathrm{d} x \, . \end{align} Now note that the remaining integral is non-zero (and equal to $\pi$) if and only if $$ \frac{(n-1)m}{2} - \sum \limits_{i=1}^m k_i = 0 \, \Longleftrightarrow \, \sum \limits_{i=1}^m k_i = \frac{(n-1)m}{2}$$ holds. Therefore we have $$ I(m,n) = N\left(\frac{(n-1)m}{2} , m , n-1\right) \pi \, ,$$ where $N\left(\frac{(n-1)m}{2} , m , n-1\right)$ is the number of $m$-tuples $(k_1 , \,\dots \, , k_m) \in \{0 \, \dots \, , n-1\}$ whose elements sum to $\frac{(n-1)m}{2}$ . $N(b,c,d)$ can also be interpreted as the number of ways to fill $c$ containers with $b$ balls such that there are at most $d$ balls in each container. These numbers are discussed in this question.
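As an independent check (my addition), one can compare a numerical quadrature of $I(m,n)$ with $\pi$ times a brute-force count of the tuples for a few small $(m,n)$; a Python sketch:

from itertools import product
import numpy as np
from scipy.integrate import quad

def I_numeric(m, n):
    f = lambda x: (np.sin(n * x) / np.sin(x)) ** m
    val, _ = quad(f, 0, np.pi, limit=200)
    return val

def tuple_count(m, n):
    # number of (k_1, ..., k_m) in {0, ..., n-1}^m summing to (n-1)m/2
    twice_target = (n - 1) * m
    if twice_target % 2:
        return 0
    target = twice_target // 2
    return sum(1 for t in product(range(n), repeat=m) if sum(t) == target)

for m, n in [(2, 3), (3, 3), (4, 2), (3, 5), (3, 2)]:
    print(m, n, I_numeric(m, n), np.pi * tuple_count(m, n))  # the last two columns should agree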
Real Analysis: Integrable
Apply induction. Change the value at one point at a time till you go from one function to the other.