A question about the Dirichlet Test for convergence $-$ $f(x)=\sin x/x^{\alpha}$
Dirichlet Test Let $f$, $g :[a,\infty)\rightarrow\Bbb R $ be such that: 1) $f$ is decreasing and $f(x)\rightarrow0$ as $x\rightarrow\infty$. 2) $g$ is continuous and there is an $M$ such that $\Bigl|\int_a^x g(t)\,dt\Bigr|\le M$ for all $x>a$. Then $\int_a^\infty f(t)g(t)\,dt$ converges. Now, is $f(x)={1\over x^\alpha}$ defined on all of $[0,\infty)$? Could you even define an extension of $f$ to $[0,\infty)$ so that it remains monotone?
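A worked instance of the test (my addition, not part of the original answer): for $\alpha>0$ take $a=1$, $f(x)=1/x^{\alpha}$ and $g(x)=\sin x$ on $[1,\infty)$, which sidesteps the behaviour at $x=0$ entirely. Then $$\Bigl|\int_1^x \sin t\,dt\Bigr| = \lvert\cos 1 - \cos x\rvert \le 2 \quad\text{for all } x>1,$$ so condition 2) holds with $M=2$, condition 1) holds since $1/x^{\alpha}$ decreases to $0$, and the Dirichlet test yields convergence of $\int_1^\infty \frac{\sin t}{t^{\alpha}}\,dt$.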
Finding the radius of convergence without the Cauchy ratio test
There is a theorem which answers this (I summarize from the book Elementary Differential Equations and Boundary Value Problems): Consider the second-order ODE $y'' + p(t)y' + q(t) y = 0$. If $p(t)$ and $q(t)$ are analytic at a point $t_0$ (that is, the Taylor series expansions of $p$ and $q$ around $t_0$ converge to $p(t)$ and $q(t)$, respectively, in some interval around $t_0$), then the general solution is given by \begin{equation} y(t) = \sum_{n = 0}^{\infty}a_n(t - t_0)^n = a_0y_1(t) + a_1 y_2(t) \end{equation} where $a_0$ and $a_1$ are arbitrary constants, and $y_1$, $y_2$ form a fundamental set of solutions. Furthermore, the radius of convergence for each of the series $y_1$ and $y_2$ is at least as large as the minimum of the radii of convergence of the series for $p$ and $q$. In your example above, $p(t) = -2t$ and $q(t) = \lambda$ are polynomials, hence analytic at every $t_0$ with infinite radius of convergence. Hence the solution $y(t)$ has infinite radius of convergence, i.e., converges everywhere.
Why can't the hyperplane H intersected with polyhedral set S contain any line...
Any line in $\mathbb R^2$ can be written as $$L = \left\{ \left(\begin{matrix}x \\ y\end{matrix}\right) + t \left(\begin{matrix}u \\ v\end{matrix}\right) : t \in \mathbb R \right\}$$ where $u$ and $v$ are not both zero. For some choice of $t$, either $x+tu < 0$ or $y + tv < 0$. So $L$ contains a point $\mathbf x$ that does not satisfy $\mathbf x \geq 0$. So $L$ cannot be contained in $S$ (and hence not in $S\cap H$, no matter what $H$ is).
Prove that the value of $\Delta$ is an integer for the given determinant
From here $$\sqrt 6\begin{vmatrix} 1&2i&3+\sqrt 6 \\ 0&\sqrt 3&\sqrt 6i-2\sqrt 3 \\ 0&\sqrt 2&2i-3\sqrt 2 \end{vmatrix}$$ Pull out a factor of $\sqrt{3}$ from the second row and a factor of $\sqrt{2}$ from the third to get $$6\begin{vmatrix} 1&2i&3+\sqrt 6 \\ 0& 1 &\sqrt 2i-2 \\ 0& 1 & \sqrt{2}i-3 \end{vmatrix}$$ Now you can compute it easily by expanding along the first row or by subtracting the second row from the third. Let's subtract. $$6\begin{vmatrix} 1&2i&3+\sqrt 6 \\ 0& 1 &\sqrt 2i-2 \\ 0& 1 & \sqrt{2}i-3 \end{vmatrix} = 6\begin{vmatrix} 1&2i&3+\sqrt 6 \\ 0& 1 &\sqrt 2i-2 \\ 0& 0 & -1 \end{vmatrix}$$ The matrix is now upper triangular, so the determinant is just the product of the diagonal entries: $6 \cdot 1 \cdot 1 \cdot (-1) = -6$.
How many turns, on average, does it take for a perfect player to win Concentration (adjusted)?
It is possible that this solution is just a reformulation of @RossMillikan's. Some notation: we denote by $C(n,k)$ a slightly modified game of concentration with $2n+k$ cards and $n$ couples. Consider a game of $n$-concentration (i.e. $n$ couples). The strategy we use is the following. The first move is to flip two cards. If they match, we are in the situation of a game of $(n-1)$-concentration, and we repeat this process until either we have finished the game, or we find two cards that don't match. In this case, not considering the two cards just flipped, we are in the situation of $C(n-2,2)$. Suppose now we are in the situation $C(n,k)$, where the $k$ singletons are known cards (those of which we have already flipped the companion). Flip an as yet never flipped card. If it is one of the $k$ singletons, pair it with its partner and we get to $C(n,k-1)$; else flip a second card, and we have three possibilities. Either we find a couple with the first card we flipped, so that we find ourselves in $C(n-1,k)$; or the second card matches one of the $k$ singletons, so that the next move is to eliminate the found couple and we are in $C(n-1,k)$; or we find two new singletons, landing in $C(n-2,k+2)$. Go on like this until the game is done. Now for the expected number of turns. Abusing notation, I write $C(n,k)$ also for the expected value of a game $C(n,k)$. We have the obvious formulae \begin{align} C(0,k) = & k,\\ C(1,0) = & 1,\\ C(n,0) = & 1 + \frac{1}{2n-1}C(n-1,0) + \left(1-\frac{1}{2n-1}\right)C(n-2,2),\\ C(n,k) = & 1 + \left(1-\frac{k}{2n+k}\right)\left(\frac{1}{2n+k-1}C(n-1,k) + \frac{k}{2n+k-1}(1+C(n-1,k)) + \frac{2n-2}{2n+k-1}C(n-2,k+2)\right) + \frac{k}{2n+k}C(n,k-1). \end{align} This allows one to compute the expected number of moves dynamically. I believe this strategy to be optimal, but I have no idea how to prove it. I don't know if there is a closed formula for $C(n,0)$. Computationally, for low values of $n$ I got: \begin{align} C(1,0) = & 1,\\ C(2,0) = & \tfrac{8}{3},\\ C(3,0) = & \tfrac{13}{3},\\ C(4,0) = & \tfrac{622}{105},\\ C(5,0) = & \tfrac{793}{105}. \end{align} If anyone wants more data in order to conjecture a closed expression for $C(n,0)$ I will be happy to provide it.
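A quick way to get more data (a memoized sketch of my own, not the original poster's program) is to implement the recurrences above directly; it reproduces the listed values $C(2,0)=8/3$, $C(3,0)=13/3$, $C(4,0)=622/105$, $C(5,0)=793/105$.

    import java.util.HashMap;
    import java.util.Map;

    public class Concentration {
        // Memo table for the expected number of turns C(n, k).
        static Map<Long, Double> memo = new HashMap<>();

        static double C(int n, int k) {
            if (n == 0) return k;            // C(0,k) = k
            if (n == 1 && k == 0) return 1;  // C(1,0) = 1
            long key = ((long) n << 32) | k;
            Double cached = memo.get(key);
            if (cached != null) return cached;
            double result;
            if (k == 0) {
                result = 1 + C(n - 1, 0) / (2 * n - 1)
                           + (1 - 1.0 / (2 * n - 1)) * C(n - 2, 2);
            } else {
                double total = 2 * n + k;
                // First card is new with probability 1 - k/(2n+k); the second card
                // then matches it, matches a singleton, or creates two singletons.
                double inner = C(n - 1, k) / (total - 1)
                             + k / (total - 1) * (1 + C(n - 1, k))
                             + (n >= 2 ? (2 * n - 2) / (total - 1) * C(n - 2, k + 2) : 0);
                result = 1 + (1 - k / total) * inner + k / total * C(n, k - 1);
            }
            memo.put(key, result);
            return result;
        }

        public static void main(String[] args) {
            for (int n = 1; n <= 8; n++)
                System.out.printf("C(%d,0) = %.6f%n", n, C(n, 0));
        }
    }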
Question on domains and ranges.
The range of $f\circ g$ is $\Bbb R\setminus\{-4\}$. The number $0$ belongs to the range, since$$f\left(g\left(\frac{85}{24}\right)\right)=0.$$
Gershgorin’s Circle Theorem
Yes. If you study the proof, you will find that the eigenvalues need not be distinct to reach the theorem's conclusion. In fact, your statement of the theorem's hypothesis is overly restrictive: the eigenvalues may be repeated or complex.
Solving $\int\limits_{-\infty}^\infty \frac{1}{x^8+1}dx$ through Glasser's Master Theorem
I don't know what GMT is. I looked it up on Wikipedia, and it lists $u = x - 1/x$ as the Cauchy–Schlömilch substitution. Here is my solution; it splits the integral into $4$ integrals, each of which uses $u = x - 1/x$. I will happily delete the answer if it is useless or wrong :) $$\int^{\infty}_{-\infty} \dfrac{1}{1+ x^8} dx = -\dfrac{1}{2 \sqrt{2}}\int^{\infty}_{-\infty} \dfrac{x^2 - \sqrt{2}}{x^4 - \sqrt 2 x^2 + 1} + \dfrac{1}{2 \sqrt{2}}\int^{\infty}_{-\infty}\dfrac{x^2 + \sqrt{2}}{x^4 + \sqrt2 x^2 + 1} \\= -\dfrac{1}{2 \sqrt{2}}\int^{\infty}_{-\infty} \dfrac{x^2 + 1}{x^4 - \sqrt 2 x^2 + 1} +\dfrac{\sqrt{2} +1}{2 \sqrt{2}} \int^{\infty}_{-\infty} \dfrac{1}{x^4 - \sqrt 2 x^2 + 1} \\+ \dfrac{1}{2 \sqrt{2}}\int^{\infty}_{-\infty}\dfrac{x^2 + 1 }{x^4 + \sqrt2 x^2 + 1}+\dfrac{\sqrt2 -1}{2 \sqrt{2}}\int^{\infty}_{-\infty}\dfrac{1}{x^4 + \sqrt2 x^2 + 1} $$ $$\int^{\infty}_{-\infty} \dfrac{x^2 + 1}{x^4 - \sqrt 2 x^2 + 1} = \int^{\infty}_{-\infty} \dfrac{1 + 1/x^2}{ (x -1/x)^2 + 2 - \sqrt 2}$$ With $u = x - 1/x$, $du = (1 + 1/x^2)\,dx$, this becomes $$\int^{\infty}_{-\infty} \dfrac{1}{u^2 + (2 - \sqrt 2 )} du $$ $$J = \int^{\infty}_{-\infty} \dfrac{1}{x^4 - \sqrt 2 x^2 + 1} = 2 \int^{\infty}_{0} \dfrac{1}{x^4 - \sqrt 2 x^2 + 1}$$ With $x = 1/t$, $dx = -dt/t^2$, $$J = 2 \int^{\infty}_{0} \dfrac{t^2}{t^4 - \sqrt 2 t^2 + 1} dt$$ Adding the two equations, $$J = \int^{\infty}_{0} \dfrac{t^2 + 1}{t^4 - \sqrt 2 t^2 + 1} dt$$ This can be solved with $u = t - 1/t$ like the previous equation. $$\int^{\infty}_{-\infty}\dfrac{x^2 + 1 }{x^4 + \sqrt2 x^2 + 1} = \int^{\infty}_{-\infty}\dfrac{1 + 1/x^2 }{(x - 1/x)^2 + \sqrt2 + 2}$$ This is now similar to the first integral. $$I = \int^{\infty}_{-\infty}\dfrac{1}{x^4 + \sqrt2 x^2 + 1} = 2\int^{\infty}_{0}\dfrac{1}{x^4 + \sqrt2 x^2 + 1}$$ With $x = 1/t$, $dx = -dt/t^2$, $$ I = \int^{\infty}_{0}\dfrac{t^2 + 1}{t^4 + \sqrt2 t^2 + 1} dt $$ It can be solved with $u = t - 1/t$ like the other integrals here.
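As a cross-check (an added note, using the standard result $\int_0^\infty \frac{dx}{1+x^n} = \frac{\pi/n}{\sin(\pi/n)}$ rather than the substitutions above): $$\int^{\infty}_{-\infty} \dfrac{dx}{1+ x^8} = 2\cdot\frac{\pi/8}{\sin(\pi/8)} = \frac{\pi}{4\sin(\pi/8)} = \frac{\pi}{2\sqrt{2-\sqrt 2}} \approx 2.0523,$$ which is the value the four integrals above must add up to.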
Uniform convergence of series: $\sum_{n=1}^{\infty}{2^n\sin\left(\frac{x}{3^n}\right)}$
$$\left|2^n\sin\frac{x}{3^n}\right| \leq 2^n\,\frac{|x|}{3^n} = |x|\left(\frac{2}{3}\right)^n,$$ so the series converges (absolutely) pointwise on the real line. The series also converges uniformly on every bounded subset of $\Bbb{R}$ by the Weierstrass $M$-test.
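Spelling out the $M$-test step (my addition): on a set $\{\,|x|\le R\,\}$, $$\sup_{|x|\le R}\left|2^n\sin\frac{x}{3^n}\right| \le R\left(\frac{2}{3}\right)^n =: M_n, \qquad \sum_{n=1}^{\infty} M_n = 2R < \infty,$$ which is exactly the hypothesis of the Weierstrass $M$-test.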
How to test data for log-normal distribution?
A variable $X$ has a log-normal distribution if and only if $\log(X)$ has a normal distribution, so to test whether data is log-normally distributed, you can simply test whether the log-transformed data is normally distributed. "Can data be log-normally distributed but not normally?" Yes, of course! In fact it is impossible (apart from the trivial case of a constant) for a variable to be both log-normally and normally distributed at the same time, since the normal distribution can take both negative and positive values, but the log-normal distribution can only take positive values.
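For instance (a sketch of one possible implementation, with made-up data; the use of the Jarque-Bera skewness/kurtosis statistic here is my own choice, not prescribed by the answer):

    import java.util.Arrays;

    // Test a positive sample for log-normality by testing log(x) for normality
    // via the Jarque-Bera statistic JB = n/6 * (S^2 + (K-3)^2/4), which is
    // approximately chi-squared with 2 degrees of freedom under normality.
    public class LogNormalCheck {
        static double jarqueBera(double[] x) {
            int n = x.length;
            double mean = Arrays.stream(x).average().orElse(0);
            double m2 = 0, m3 = 0, m4 = 0;
            for (double v : x) {
                double d = v - mean;
                m2 += d * d; m3 += d * d * d; m4 += d * d * d * d;
            }
            m2 /= n; m3 /= n; m4 /= n;
            double skew = m3 / Math.pow(m2, 1.5);
            double kurt = m4 / (m2 * m2);
            return n / 6.0 * (skew * skew + (kurt - 3) * (kurt - 3) / 4);
        }

        public static void main(String[] args) {
            double[] sample = {0.6, 1.1, 0.9, 2.3, 0.4, 1.7, 3.1, 0.8, 1.2, 0.5};
            double[] logged = Arrays.stream(sample).map(Math::log).toArray();
            double jb = jarqueBera(logged);
            // Reject normality of log(X) at the 5% level if JB > 5.99
            // (the 0.95 quantile of the chi-squared distribution with 2 df).
            System.out.println("JB = " + jb);
        }
    }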
Master method to find the tight asymptotic bound
Assume that $T_n=a+b n+c n^2$ and substitute: $$T(n) - T(5n/7) - n=\frac{2 b n}{7}-n+\frac{24 c n^2}{49}.$$ For this to vanish identically, matching coefficients gives $c=0$ (from the $n^2$ term) and $\frac{2b}{7}=1$, i.e. $b=\frac 72$ (from the $n$ term), and then $$T_n=a+\frac 72 n,$$ so $T_n=\Theta(n)$.
Vector potentials of vector potentials
Following the prescription of this question, given a vector field $\mathbf A_0$ such that $\nabla \cdot \mathbf A_0=0$, we construct a sequence $\mathbf{A}_0, \mathbf A_1, \mathbf A_2, \ldots$ such that $$ \begin{array}{cc} \nabla \cdot \mathbf A_1 = 0, & \nabla \times \mathbf A_1= \mathbf A_0\\ \nabla \cdot \mathbf A_2 = 0, & \nabla \times \mathbf A_2= \mathbf A_1\\ \nabla \cdot \mathbf A_3 = 0, & \nabla \times \mathbf A_3= \mathbf A_2\\ \vdots & \vdots \end{array}$$ Taking the curl of the right column, and using the formula for the double curl, we see that $$ -\nabla^2 \mathbf A_{n+2} = \mathbf A_n, \quad n=0, 1, 2, \ldots,$$ where $\nabla^2$ denotes the vector Laplacian. This is the Poisson equation and it can be solved by convolution against the Newtonian potential: $$\tag{*} \mathbf A_{n+2}(x)=\int_{\mathbb R^3} \frac{ \mathbf A_n(x-y)}{4\pi \lvert y\rvert}\, dy.$$ Remark. The vector field $\mathbf A_{n+2}$ given by (*) is divergence-free if $\mathbf A_n$ is. The proof is immediate: just take the divergence and interchange differentiation and integration. Thus, (*) has the correct gauge. To gather information about the limit as $n\to \infty$, (*) is impractical. However, taking the Fourier transform, defined as $$ \hat{f}(k)=\int_{\mathbb R^3} f(x)e^{-i x\cdot k}\, dx, $$ we see that $$\tag{**} \hat{\mathbf A}_{n+2}(k)=\frac{\hat{\mathbf A}_n(k)}{\lvert k\rvert^2}.$$ We conclude that $$ \hat{\mathbf A}_n(k)=\begin{cases} \frac{\hat{\mathbf A}_0(k)}{\lvert k\rvert^{2m}}, & n=2m, \\ \frac{\hat{\mathbf A}_1(k)}{\lvert k\rvert^{2m}}, & n=2m+1. \end{cases}$$ In particular: the sequence never reaches a fixed point, except for the trivial cases in which $\mathbf A_0=\mathbf 0$ or $\mathbf A_1 =\mathbf 0$; if $\hat{\mathbf A}_0$ is supported in $\lvert k\rvert <1$, then the sequence $\hat{\mathbf A}_n$ for even $n$ blows up, while if it is supported in $\lvert k\rvert >1$, it tends to zero. Similar considerations apply to $\hat{\mathbf A}_n$ for odd $n$.
Closed form or asymptotic expansion for $\int_0^m \frac{n^x}{\Gamma(x+1)}dx$?
For $\int_0^m\dfrac{n^x}{\Gamma(x+1)}dx$, by using the formula $$\int_0^m f(x)\,dx=m\sum_{s=1}^\infty\frac{1}{2^s}\sum_{r=1}^{2^s-1}(-1)^{r+1}f\!\left(\frac{rm}{2^s}\right)$$ (a telescoping of dyadic Riemann sums, valid for Riemann-integrable $f$), we have $$\int_0^m\dfrac{n^x}{\Gamma(x+1)}dx=m\sum\limits_{s=1}^\infty\sum\limits_{r=1}^{2^s-1}\dfrac{(-1)^{r+1}n^\frac{rm}{2^s}}{2^s\Gamma\left(\dfrac{rm}{2^s}+1\right)}.$$ For $\int_0^\infty\dfrac{n^x}{\Gamma(x+1)}dx$, according to http://books.google.hu/books?id=Pl5I2ZSI6uAC&pg=PA262&lpg=PA262&dq=Frans%C3%A9n%E2%80%93Robinson%20constant#v=onepage&q=Frans%C3%A9n%E2%80%93Robinson%20constant&f=false, we have $$\int_0^\infty\dfrac{n^x}{\Gamma(x+1)}dx=e^n-\int_{-\infty}^\infty\dfrac{e^{-ne^y}}{y^2+\pi^2}dy.$$
What is the derivative of the determinant of a symmetric positive definite matrix?
Note that for a symmetric matrix the derivative $\partial X/\partial x_{ij}$ is equal to $$ \frac{\partial X}{\partial x_{ij}} = E_{ij} + E_{ji} - E_{ij}E_{ij} $$ (the last term corrects the double count when $i=j$). So, for the trace of the product of $X^{-1}$ and $\partial X/\partial x_{ij}$ we have $$ \text {tr}(X^{-1}E_{ij}) + \text {tr}(X^{-1}E_{ji}) - \text {tr}(X^{-1}E_{ij}E_{ij}) = (X^{-1})_{ji}+(X^{-1})_{ij} - \delta_{ij}(X^{-1})_{ii} $$ and, collecting these entries into one matrix, $$ \text {tr}(X^{-1}\,\partial X/\partial X) = X^{-T} + X^{-1} - \text {diag}(X^{-1}) = 2X^{-1} - \text {diag}(X^{-1}) $$ Finally, since $X^{T} = X$, we have that $$ \frac{\partial\det(X)}{\partial X} = \det(X)\left(2X^{-1} - \text {diag}(X^{-1})\right) $$ P.S. $E_{ij}$ is the matrix of zeros with a $1$ in position $(i,j)$.
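A quick numerical sanity check (my own sketch, not part of the original answer): compare the formula against a finite difference with a symmetric perturbation on a small symmetric positive definite matrix.

    // Verifies d(det X)/dx_ij = det(X) * (2 X^{-1} - diag(X^{-1}))_ij for a 2x2
    // symmetric X, perturbing x_ij and x_ji together (symmetric perturbation).
    public class DetDerivativeCheck {
        static double det(double[][] m) {
            return m[0][0] * m[1][1] - m[0][1] * m[1][0];
        }

        public static void main(String[] args) {
            double[][] X = {{2, 1}, {1, 3}}; // symmetric positive definite, det = 5
            double d = det(X);
            double[][] inv = {{X[1][1] / d, -X[0][1] / d}, {-X[1][0] / d, X[0][0] / d}};
            double h = 1e-6;
            for (int i = 0; i < 2; i++)
                for (int j = 0; j < 2; j++) {
                    // Formula: det(X) * (2 X^{-1} - diag(X^{-1}))_ij
                    double formula = d * (2 * inv[i][j] - (i == j ? inv[i][i] : 0));
                    // Finite difference respecting the symmetry constraint x_ij = x_ji
                    double[][] Y = {{X[0][0], X[0][1]}, {X[1][0], X[1][1]}};
                    Y[i][j] += h;
                    if (i != j) Y[j][i] += h;
                    double numeric = (det(Y) - d) / h;
                    System.out.printf("(%d,%d): formula=%.4f numeric=%.4f%n", i, j, formula, numeric);
                }
        }
    }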
Is the closure of a meager set meager?
As André Nicolas noted, this is not true: the set of rationals $\mathbb{Q}$ (with $\overline{\mathbb{Q}} = \mathbb{R}$) is a counterexample.
Can the Fubini theorem be applied to a trapezoid?
Fubini's theorem says a double integral (where the integral of the absolute value is finite) is equal to either of two iterated integrals. One concrete case: $$ \iint\limits_{[a,b]\times[c,d]} f(x,y)\,d(x,y) = \int_a^b\left(\int_c^d f(x,y) \, dy \right) \, dx = \int_c^d \left( \int_a^b f(x,y)\, dx \right) \, dy, $$ provided that $$ \iint\limits_{[a,b]\times[c,d]} |f(x,y)|\,d(x,y) <\infty. $$ One way to use this theorem is to exploit the fact that one of the two iterated integrals may be readily evaluated. As for trapezoids, here is an example: $$T=\{(x,y) : 1\le y\le2,\ 0\le x\le y \}\tag{1}$$ Suppose one wants $$ \iint\limits_T e^{y^2} \,d(x,y). $$ Applying $(1)$, this becomes $$ \int_1^2 \left( \int_0^y e^{y^2} \, dx \right)\,dy. $$ This is easily evaluated, whereas if one had integrated first with respect to $y$ and afterward with respect to $x$, one would face the intractable integral $\int e^{y^2}\, dy$. Does Fubini's theorem justify this? The answer is "yes" because one can view the integral as $$ \iint\limits_{[0,2]\times[1,2]} f(x,y) \, d(x,y) $$ where $$ f(x,y)=\begin{cases} e^{y^2} & \text{if }x\le y, \\[10pt] 0 & \text{otherwise}. \end{cases} $$
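For completeness (carrying out the evaluation the answer calls easy): $$ \int_1^2 \left( \int_0^y e^{y^2} \, dx \right) dy = \int_1^2 y\, e^{y^2} \, dy = \left[\tfrac12 e^{y^2}\right]_1^2 = \frac{e^4 - e}{2}. $$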
Integral of a polynomial over a three-dimensional ball
This probably isn't the slickest way of presenting the answer, but here goes. We're going to repeatedly use the change of variables theorem in what follows. Let $B$ denote the unit ball in $\Bbb{R}^3$. Then, by the change of variables formula, if $\alpha,\beta,\gamma$ are non-negative integers, \begin{equation} \int_B x^{\alpha}y^{\beta}z^{\gamma} \, dV = - \int_B x^{\alpha}y^{\beta}z^{\gamma} \, dV = 0, \end{equation} provided that at least one of $\alpha,\beta,\gamma$ is odd. For example if $\alpha$ is odd, use the change of variables $(x,y,z) \mapsto (-x,y,z)$. If $\beta$ is odd, use $(x,y,z) \mapsto (x,-y,z)$, and use $(x,y,z) \mapsto (x,y,-z)$ if $\gamma$ is odd. For example, $\int_B x \, dV = \int_B yz^2 \, dV = 0$, etc. This works because the region of integration is still the unit ball $B$, the absolute value of the determinant is $1$, but the actual integrand picks up a minus sign. This observation immediately implies that we can ignore "cubic" terms, the "cross-quadratic terms" (like $xy$) and linear terms in $f(x,y,z)$, because these terms all integrate to $0$. So, if we write \begin{equation} f(x,y,z) = \text{cubic terms} + a_1x^2 + a_2y^2 + a_3 z^2 + \text{cross quadratic terms} + \text{linear terms} + c, \end{equation} then to quickly integrate such an expression, notice (by symmetry under change of variables) that \begin{equation} \int_B x^2 \, dV = \int_B y^2 \, dV = \int_B z^2 \, dV = \dfrac{1}{3} \int_B (x^2 + y^2 + z^2) \, dV \end{equation} The last expression can be easily integrated using spherical coordinates, and the answer is $\dfrac{4\pi}{15}.$ Lastly, integrating the constant term means we simply multiply by the volume of the unit ball. Hence, putting this all together, we find that \begin{align} \int_B f \, dV = (a_1 + a_2 + a_3) \cdot \dfrac{4 \pi}{15} + c \cdot \dfrac{4\pi}{3} \tag{$*$} \end{align} What this shows is that if $f$ is a polynomial of degree at most $3$, then integrating over the unit ball only depends on the constant term, and the coefficients of the "pure quadratic terms" (like $x^2,y^2,z^2$). It is easy to verify that \begin{align} a_1+a_2+a_3 = \dfrac{\Delta f(0,0,0)}{2} \quad \text{and} \quad c= f(0,0,0). \tag{$**$} \end{align} So, substituting $(**)$ into $(*)$, we find that \begin{equation} \int_B f\, dV = \dfrac{4 \pi}{3} \cdot f(0,0,0) + \dfrac{2 \pi}{15} \cdot \Delta f (0,0,0). \end{equation} Additional Remarks: I found this question pretty interesting, so I tried to generalize this result to $n$ dimensions, and here's what I came up with so far. I'll prove that if $f: \Bbb{R}^n \to \Bbb{R}$ is a polynomial of degree at most $3$, $B = \{\xi \in \Bbb{R}^n: \lVert \xi\rVert^2 \leq 1\}$ is the closed unit ball, and $dV$ denotes the $n$-dimensional volume element, then \begin{align} \int_B f \, dV &= f(0) \cdot \text{vol}(B) + \dfrac{\text{trace}(D^2f_0)}{2}\cdot \lambda \\ &= f(0) \cdot \text{vol}(B) + \dfrac{\Delta f(0)}{2}\cdot \lambda, \end{align} where $D^2f_0$ is the second differential of $f$ at $0$ (a symmetric bilinear form), $\Delta = \sum_{i=1}^n \frac{\partial^2}{\partial x_i^2}$ is the Laplacian, and $\lambda$ is a constant, computed by \begin{equation} \lambda := \dfrac{1}{n} \int_B \lVert \xi \rVert^2 \, dV \end{equation} The proof of this is very similar to the one I gave above in the special case.
First, note that since $f$ is a polynomial by assumption, it equals its own Taylor polynomial: \begin{equation} f(\xi) = f(0) + Df_0(\xi) + \dfrac{1}{2}D^2f_0(\xi)^2 + \dfrac{1}{6}D^3f_0(\xi)^3 \end{equation} where $D^kf_0$ is a symmetric $k$-linear map from $\Bbb{R}^n \times \cdots \times \Bbb{R}^n$ into $\Bbb{R}$ and $(\xi)^k$ denotes the element $(\xi,\dots, \xi) \in \Bbb{R}^n \times \cdots \times \Bbb{R}^n$ ($k$ factors). Hence to compute $\int_B f \, dV$, we have to sum up $4$ terms. By an argument almost identical to the one I gave above, the cubic and linear terms all vanish after integration: \begin{equation} \int_B Df_0(\xi) \, dV = \int_B D^3f_0(\xi)^3 \, dV = 0. \end{equation} So, we have that \begin{align} \int_B f \, dV &= \int_B \left [f(0) + \dfrac{1}{2} D^2f_0(\xi)^2 \right] \, dV \\ &= f(0)\cdot \text{vol}(B) + \dfrac{1}{2} \int_B D^2f_0(\xi)^2 \, dV \end{align} In the second term, $D^2f_0(\xi)^2$ is a sum of terms of the form $\xi_i \xi_j \cdot (\partial_i \partial_j f)(0)$. But now notice that (again by change of variables) if $i \neq j$ then \begin{equation} \int_B \xi_i \xi_j \, dV = - \int_B \xi_i \xi_j \, dV = 0 \end{equation} Hence, the only contribution to the integral comes from terms where $i=j$ (the "diagonal" terms). More precisely, \begin{align} \int_B D^2f_0(\xi,\xi) \, dV &= \sum_{i=1}^n (\partial_i^2 f)(0) \cdot \left( \int_B (\xi_i)^2 \, dV \right) \tag{$\ddot{\smile}$} \end{align} But now notice (again by symmetry under change of variables) that \begin{align} \int_B (\xi_1)^2 \, dV = \dots = \int_B (\xi_n)^2 \, dV = \dfrac{1}{n} \int_B \lVert \xi \rVert^2 \, dV =: \lambda \tag{$\ddot{\smile} \ddot{\smile}$} \end{align} Substituting $\ddot{\smile} \ddot{\smile}$ into $\ddot{\smile}$ yields the result \begin{align} \int_B D^2f_0(\xi)^2 \, dV &= \lambda \cdot \sum_{i=1}^n (\partial_i^2f)(0) \\ &= \lambda \cdot \Delta f(0) = \lambda \cdot \text{trace}(D^2f_0) \end{align} This proves that \begin{equation} \int_B f \, dV = f(0) \cdot \text{vol}(B) + \dfrac{\text{trace}(D^2f_0)}{2}\cdot \lambda \end{equation} In the case $n=3$, everything was nice because we could easily compute $\text{vol}(B)$ and $\lambda$ explicitly using spherical coordinates. In higher dimensions, this will necessarily be more complicated, and I think it will involve the use of gamma functions and stuff. Also, if we allow for higher order polynomials, I'm pretty sure we can get a formula involving $f(0),D^2f_0, D^4f_0,D^6f_0 \dots$, although things will probably get more messy.
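In fact the gamma-function computation anticipated in the last paragraph is short (my addition): the sphere of radius $r$ has surface measure $n\operatorname{vol}(B)\,r^{n-1}$, where $\operatorname{vol}(B)=\pi^{n/2}/\Gamma(\frac n2+1)$, so integrating in polar coordinates, $$ \int_B \lVert\xi\rVert^2\,dV = \int_0^1 r^2\cdot n\operatorname{vol}(B)\,r^{n-1}\,dr = \frac{n\operatorname{vol}(B)}{n+2}, \qquad\text{hence}\qquad \lambda = \frac{\operatorname{vol}(B)}{n+2}. $$ For $n=3$ this gives $\lambda = \frac{4\pi/3}{5}=\frac{4\pi}{15}$, recovering the coefficient $\frac{2\pi}{15}$ of $\Delta f(0)$ found in the special case above.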
Rank of the sum of an all-ones matrix and an identity matrix
Notice that matrix rank is subadditive: for any $n\times n$ matrices $P, Q$, we have $${\rm rank}(P+Q) \le {\rm rank}(P) + {\rm rank}(Q)$$ Since ${\rm rank}(B) = 1$, we have $$n = {\rm rank}(-I) = {\rm rank}\left(A - \frac{B}{n}\right) \le {\rm rank}(A) + {\rm rank}\left(-\frac{B}{n}\right) = {\rm rank}(A) + 1$$ This implies ${\rm rank}(A) \ge n - 1$. Together with the inequality ${\rm rank}(A) < n$ that you already have, you can deduce ${\rm rank}(A) = n - 1$.
Compute a natural number $n\geq 2$ s.t. $p\mid n \Longrightarrow p^2\nmid n$ AND $p-1\mid n \Longleftrightarrow p\mid n$ for all prime divisor p of n.
Let $$n=p_1p_2\cdots p_k$$ where all $p_i$ are distinct and $p_i<p_j$ iff $i<j$ (this number is square-free, since $p|n$ implies $p^2\not|n$ for a prime $p$). Now, since $p_i-1|n$, we know that either $k=1$ and $n=2$, or $k>1$ and $2|n$, so $$n=2p_2p_3\cdots p_k$$ Now $p_2-1|n$, and $p_2-1$ can only have the factor $2$, since $p_2$ is the smallest prime factor of $n$ greater than $2$ - so, $p_2=3$. If $k=2$, then $n=6$. Now let $k>2$. Then $p_3-1$ can only have the factors $2$ and $3$ - so $p_3-1\in\{2,3,6\}$ or $p_3\in\{3,4,7\}$. Since $p_3$ cannot be $3$ as $p_3>p_2$, and not $4$ since it's not prime, $p_3=7$. So if $k=3$, then $n=42$. Now let $k>3$. Then $p_4-1$ can only have the factors $2$, $3$, $7$: so $$p_4-1\in\{2,3,6,7,14,21,42\}$$ so $$p_4\in\{3,4,7,8,15,22,43\}$$ Since $p_4$ needs to be a prime and $p_4>p_3$, we know $p_4=43$. So if $k=4$ then $n=1806$. Now let $k>4$. Rinse, repeat: $p_5-1$ can only have the factors $2$, $3$, $7$, $43$, blah blah blah, $$p_5-1\in\{2,3,6,7,14,21,42,43,86,129,258,301,602,903,1806\}$$ so $$p_5\in\{3, 4, 7, 8, 15, 22, 43, 44, 87, 130, 259, 302, 603, 904, 1807\}$$ and all these numbers are composite or at most $43$; thus, $p_5$ doesn't exist. Finally, we're done. We have all solutions: $$n\in\{2,6,42,1806\}$$
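A brute-force confirmation (a sketch of my own; the argument above is of course complete without it) that checks, for each $n$ up to a bound, that $n$ is squarefree and that $p-1\mid n$ for every prime divisor $p$ of $n$:

    public class Find1806 {
        public static void main(String[] args) {
            for (long n = 2; n <= 1_000_000; n++)
                if (ok(n)) System.out.println(n); // prints 2, 6, 42, 1806
        }

        // n must be squarefree and every prime factor p of n must satisfy (p-1) | n.
        static boolean ok(long n) {
            long m = n;
            for (long p = 2; p * p <= m; p++) {
                if (m % p == 0) {
                    m /= p;
                    if (m % p == 0) return false;       // p^2 | n: not squarefree
                    if (n % (p - 1) != 0) return false; // (p-1) must divide n
                }
            }
            if (m > 1 && n % (m - 1) != 0) return false; // remaining prime factor
            return true;
        }
    }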
Find a positive matrix near a non-negative matrix
Yes. By permuting the rows and columns of $A$ simultaneously, we may assume that $A=A_r\oplus0$, where $A_r$ is some $r\times r$ irreducible matrix whose indices of inertia are $(n_+,n_-)=(1,r-1)$. Let $\gamma=\frac{\epsilon}{n-r+1}$ and let $A_r=Q\,\operatorname{diag}(\rho(A_r),\lambda_2,\ldots,\lambda_r)\,Q^T$ be an orthogonal diagonalisation, where the first column of $Q=\pmatrix{u_r&V}$ is the Perron vector of $A_r$. Pick any sufficiently small $t>0$ such that $tu_ru_r^T-t^2VV^T$ is entrywise positive (this is possible because $u_r$ is a positive vector) and $\|tu_ru_r^T-t^2VV^T\|_F^2<\gamma$. Let \begin{aligned} B_r&=A_r+Q\,\operatorname{diag}(t,-t^2,\ldots,-t^2)\,Q^T\\ &=Q\,\operatorname{diag}(\rho(A_r)+t,\,\lambda_2-t^2,\ldots,\,\lambda_r-t^2)\,Q^T. \end{aligned} Clearly $B_r$ is symmetric and it has the same Perron vector and indices of inertia as $A_r$, but unlike $A_r$, this $B_r$ is entrywise positive because $$ B_r-A_r=Q\,\operatorname{diag}(t,-t^2,\ldots,-t^2)\,Q^T=tu_ru_r^T-t^2VV^T $$ is entrywise positive. Also, note that $\|A_r-B_r\|_F^2=\|tu_ru_r^T-t^2VV^T\|_F^2<\gamma$. $B_r$ is only $r\times r$, not $n\times n$. We now try to enlarge its size and grow the number of negative eigenvalues by one. Pick a sufficiently small number $t_r>0$ such that $\frac{1}{t_r}>\rho(B_r)$ and $2t_r^2+t_r^6<\gamma$. Define $$ B_{r+1}=\pmatrix{B_r&t_ru_r\\ t_ru_r^T&t_r^3}\in M_{r+1}(\mathbb R). $$ Clearly $B_{r+1}$ is symmetric and entrywise positive. It is also congruent to $\left(B_r-\frac{1}{t_r}u_ru_r^T\right)\oplus t_r^3$. Since $\frac{1}{t_r}>\rho(B_r)$ and all eigenvalues except $\rho(B_r)$ are negative, $B_r-\frac{1}{t_r}u_ru_r^T$ is negative definite. It follows that the indices of inertia of $B_{r+1}$ are $(n_+,n_-)=(1,r)$. Similarly, if we take $u_{r+1}$ as the Perron unit vector of $B_{r+1}$ and pick some $t_{r+1}>0$ such that $\frac{1}{t_{r+1}}>\rho(B_{r+1})$ and $2t_{r+1}^2+t_{r+1}^6<\gamma$, we can construct some $B_{r+2}\in M_{r+2}(\mathbb R)$ whose indices of inertia are $(n_+,n_-)=(1,r+1)$. Continuing in this manner, we finally obtain a symmetric and entrywise positive matrix $B_n\in M_n(\mathbb R)$ with one positive eigenvalue and $n-1$ negative eigenvalues. By construction, we have $$ \|A-B_n\|_F^2 =\|A_r-B_r\|_F^2+\sum_{k=r}^{n-1}(2t_k^2+t_k^6) <(n-r+1)\gamma=\epsilon. $$ Hence we may take $B=B_n$.
How can we show $\frac{\pi^2}{8} = 1 + \frac1{3^2} +\frac1{5^2} + \frac1{7^2} + \cdots$?
From the Basel Problem, we have $$\frac{\pi^2}{6} = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2}\dots$$ $$\frac{\pi^2}{24} = \frac{\pi^2}{6\cdot 2^2} = \frac{1}{2^2} + \frac{1}{4^2} + \frac{1}{6^2}\dots$$ so that $$\begin{align}\frac{\pi^2}{8} &= \frac{\pi^2}{6} - \frac{\pi^2}{24}\\&=\frac{1}{1^2} + \frac{1}{3^2} + \frac{1}{5^2} + \dots \end{align}$$
How to find the Maximum Likelihood Estimator, the expected lifetime and the probability of the following situation?
You are right that $t_i\sim \exp(k\theta).$ More formally, $t_i> T$ if and only if each of the $k$ devices lasts longer than $T.$ Each has a probability $e^{-\theta T}$ of doing so and is independent, so $P(t_i> T) = (e^{-\theta T})^k = e^{-k\theta T}.$ From there your maximum likelihood estimation for $\theta$ is correct. For the second part, I'm confused too. When I hear "expected lifetime" I think of the true mean $\frac{1}{\theta},$ which is of course unknown. If you plug in the MLE you get an ML-estimated version of this, which based on the question structure is probably what they're looking for, but I don't think they phrased it well. You could also interpret what they asked in a Bayesian way (i.e. what is the expected value of the lifetime under the posterior distribution), which would actually make the most sense here, but I doubt that's what they're going for since they haven't given or mentioned priors. Same goes for the others, of course.
Find an equation of the plane that passes through the point $(1,2,3)$, and cuts off the smallest volume in the first octant. *help needed please*
The volume of a pyramid (of any shaped base) is $\frac13A_bh$, where $A_b$ is the area of the base and $h$ is the height (perpendicular distance from the base to the opposing vertex). In this particular case, we're considering a triangular pyramid, with the right triangle $OAB$ as a base and opposing vertex $C$. The area of the base is $\frac12ab$, and the height is $c$, so the volume of the tetrahedron is $\frac16abc$--equivalently, $abc$ is $6$ times the volume of the tetrahedron.
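From here the original problem can be finished (a standard completion, added for convenience): the plane $\frac xa+\frac yb+\frac zc=1$ must pass through $(1,2,3)$, so by AM-GM $$ 1 = \frac1a+\frac2b+\frac3c \geq 3\sqrt[3]{\frac{6}{abc}}, $$ hence $abc \geq 162$ and the volume $\frac{abc}{6} \geq 27$, with equality when $\frac1a=\frac2b=\frac3c=\frac13$, i.e. $a=3$, $b=6$, $c=9$. The minimizing plane is $\frac x3+\frac y6+\frac z9=1$.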
Relationship between Turing bifurcation, saddle-node bifurcation, and Hopf bifurcation?
Generally, one real eigenvalue crossing the imaginary axis is called saddle-node bifurcation. A pair of complex eigenvalues crossing the imaginary axis is referred to as Hopf bifurcation. This is true; however, some additional conditions have to be met to identify the bifurcation as saddle-node or Hopf. This is less important with the Hopf bifurcation, since the appearance of two purely imaginary eigenvalues in the vast majority of cases implies the appearance (or disappearance) of a unique limit cycle, but in the case of a simple real eigenvalue crossing the imaginary axis there are at least two more cases which are often met in applications: transcritical bifurcation and pitchfork bifurcation. At a qualitative level:
- saddle-node bifurcation corresponds to the appearance "out of nowhere" of two steady states, one stable, the other unstable;
- transcritical bifurcation corresponds to the case when two steady states exchange, upon meeting each other, their stabilities;
- pitchfork bifurcation corresponds to the case when, from one, say stable, equilibrium, two more stable equilibria appear and the original one becomes unstable.
The exact conditions for which type you encounter depend on the normal form of the equation you consider. See, e.g., this book. Are these two types of bifurcations exactly the ones that occur for the PDE (while the steady state remains stable for the ODE) in what is generally referred to as Turing bifurcation / Turing instability? Yes, in the spatially explicit system all four types can appear, in a way similar to how they appear in local systems described by ODEs. If so, what are the qualitative differences, if any, between the patterns formed past the Turing bifurcation? In the case of a simple zero eigenvalue, what is usually expected is the appearance of a spatially non-homogeneous steady state (or more than one). This steady state can be stable, generating spatially non-homogeneous patterns, which are used in various problems of pattern formation and morphogenesis. A lot of details, together with the technicalities, are given, e.g., in James Murray's book Mathematical Biology. A Hopf bifurcation yields not only spatially non-homogeneous solutions, but also temporally periodic oscillations. P.S. And a final remark. The name "Hopf bifurcation" is quite unfortunate. The fact that periodic solutions appear under parameter change when two eigenvalues cross the imaginary axis was known to the father of the qualitative analysis of dynamical systems -- Poincaré. The exact statements of the theorem in the planar case, together with proofs, were given by Andronov. The main contribution of Hopf was the generalization of this situation to the $n$-dimensional case. So the correct name would be Poincaré-Andronov-Hopf bifurcation, or, in my opinion, a much better option is "the bifurcation of the birth of a limit cycle" :)
Burnside convolution
Let me restate your question for groups. Transitive $G$-sets modulo isomorphisms are in bijective correspondence with the set of subgroups modulo conjugation, and finite ones correspond to finite index subgroups. For subgroups $A,B,C$ of $G$, the condition $b(G/A,G/B,G/C)\neq 0$ means that there exist conjugates $A',B'$ of $A,B$ such that $C=A'\cap B'$. Since every finite index subgroup is contained in only finitely many subgroups, this leaves finitely many possibilities for $A',B'$, and hence, up to conjugation, finitely many possibilities for $A,B$. So the answer (to question 1) is yes.
Water filling problem in Blocks - Algebra Question
You can solve this problem in the following way, though I'm not sure about the complexity of the code.

    public static void main(String args[]) {
        int inputArray[] = { 3, 3, 4, 4, 4, 2, 3, 1, 3, 2, 1, 4, 7, 3, 1, 6, 4, 1 };
        int totalWater = GetWaterLevel(3, 6, inputArray);
        System.out.println("Total Water=" + totalWater);
    }

    static int prows;
    static int pcolumns;
    static int pheightsArray[];

    // Flood fill from (row, column) through every cell whose height is <= atHeight.
    // Returns false as soon as the fill can escape over a low boundary cell; the
    // visited array keeps the mutual recursion from looping forever on plateaus
    // of equal height.
    static boolean isEnclosed(int row, int column, int atHeight, boolean visited[]) {
        int index = row * pcolumns + column;
        if (pheightsArray[index] > atHeight)
            return true; // a wall taller than the water level blocks this path
        if (row == 0 || row == prows - 1 || column == 0 || column == pcolumns - 1)
            return false; // low cell on the boundary: water escapes
        if (visited[index])
            return true; // already explored; no leak found through here
        visited[index] = true;
        return isEnclosed(row - 1, column, atHeight, visited)  // up
            && isEnclosed(row + 1, column, atHeight, visited)  // down
            && isEnclosed(row, column - 1, atHeight, visited)  // left
            && isEnclosed(row, column + 1, atHeight, visited); // right
    }

    public static int GetWaterLevel(int input1, int input2, int input3[]) {
        int rows = input1;
        int columns = input2;
        int heightsArray[] = input3;
        prows = rows;
        pcolumns = columns;
        pheightsArray = heightsArray;
        int total = 0;
        int maxHeight = 0;
        for (int i = 0; i < rows * columns; i++)
            if (heightsArray[i] > maxHeight)
                maxHeight = heightsArray[i];
        // Cells on the boundary can never hold water, so only interior cells are scanned.
        for (int row = 1; row < rows - 1; row++) {
            for (int column = 1; column < columns - 1; column++) {
                // For each cell, rise from the cell's top toward the maximum height;
                // every level at which the cell is still enclosed holds one unit.
                for (int atHeight = heightsArray[row * columns + column]; atHeight < maxHeight; atHeight++) {
                    if (isEnclosed(row, column, atHeight, new boolean[rows * columns]))
                        total++;
                    else
                        break; // water escaping at this level escapes at all higher levels
                }
            }
        }
        return total;
    }
How to find the total distance traveled, given the position function?
You should integrate the absolute value of the velocity from $0$ to $3$. Then you get the desired result.
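For instance (an illustrative example with a made-up position function, since the question's function is not quoted here): if $s(t)=t^2-4t$, then $v(t)=2t-4$, and the total distance traveled on $[0,3]$ is $$\int_0^3 |2t-4|\,dt = \int_0^2 (4-2t)\,dt + \int_2^3 (2t-4)\,dt = 4+1=5,$$ whereas the net displacement is only $s(3)-s(0)=-3$.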
Modification of distribution function
To anyone who stumbles upon this: the solution is pretty simple. You just need to realize what $x$ means in this case. It is the number of seconds. So in the case where computing takes 30% less time, $y = 0.7x$. The distribution function is therefore the same; you just substitute $0.7x$ for $x$.
How do I find the minimum of this function?
$$f(x) = (x-b)^TAx = x^TAx - b^TAx$$ $$\frac{\partial}{\partial x}f(x) = (A+A^T)x - A^Tb = 0$$ that is, the minimizer should satisfy $$(A+A^T)x^* = A^Tb$$ (for this stationary point to be a minimum, $A+A^T$ should be positive definite). If $A$ is invertible then $$x^* = (A+A^T)^{-1}A^Tb = \big(A^{-T}(A+A^T)\big)^{-1}b = (I + A^{-T}A)^{-1} b$$
Collision detection between two accelerating spheres with no initial velocity?
Find both trajectories. Calculate the distance $d$ between both centers with respect to $t$. Find if there is a solution - such a $t$ that $d=r_1+r_2$. That's the outline. You've got to point out if you have specific problems with some of the steps. EDIT: It seems to me that your equations are a bit wrong. The first coordinate of the first sphere will be $x_1+\frac{a_{1i}t^2}{2}$ at moment $t$. So, if I call that $\vec{x}_1=(x_1,y_1,z_1)$ and similarly introduce the acceleration vector, the position at $t$ will be given by $\vec{x}_1+\frac{\vec{a}_1t^2}{2}$. You should then write both these vectors and calculate the distance vector between them (just subtract). You must find the length of this vector and find if there exists such a $t$ that the length is less than or equal to $r_1+r_2$.
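Concretely (a sketch under the stated assumptions: both spheres start at rest with constant accelerations; all names are mine): with $\vec d=\vec x_1-\vec x_2$ and $\vec g=(\vec a_1-\vec a_2)/2$, the squared distance $\lVert\vec d+\vec g\,t^2\rVert^2$ is a quadratic in $s=t^2$, so the collision condition reduces to one quadratic equation:

    public class SphereCollision {
        static double dot(double[] u, double[] v) {
            return u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
        }

        // Earliest t >= 0 with |d + g t^2| = r1 + r2, or -1 if the spheres never touch.
        static double collisionTime(double[] p1, double[] a1, double r1,
                                    double[] p2, double[] a2, double r2) {
            double[] d = new double[3], g = new double[3];
            for (int i = 0; i < 3; i++) {
                d[i] = p1[i] - p2[i];
                g[i] = (a1[i] - a2[i]) / 2.0;
            }
            double R = r1 + r2;
            // |d + g s|^2 = R^2 with s = t^2:  (g.g) s^2 + 2 (d.g) s + (d.d - R^2) = 0
            double A = dot(g, g), B = 2 * dot(d, g), C = dot(d, d) - R * R;
            if (A == 0) return C <= 0 ? 0 : -1; // equal accelerations: distance is constant
            double disc = B * B - 4 * A * C;
            if (disc < 0) return -1;
            double s1 = (-B - Math.sqrt(disc)) / (2 * A);
            double s2 = (-B + Math.sqrt(disc)) / (2 * A);
            double s = s1 >= 0 ? s1 : (s2 >= 0 ? s2 : -1); // smallest non-negative root
            return s < 0 ? -1 : Math.sqrt(s);
        }

        public static void main(String[] args) {
            // Two unit spheres 10 apart on the x-axis, accelerating toward each other:
            // the centers must close to distance 2, so t^2 = 8 and t = 2*sqrt(2).
            double t = collisionTime(new double[]{0, 0, 0}, new double[]{1, 0, 0}, 1,
                                     new double[]{10, 0, 0}, new double[]{-1, 0, 0}, 1);
            System.out.println("collision at t = " + t);
        }
    }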
How to show that, for a linear code C, that the covering radius of the code is the same as the weight of the coset of largest weight.
"Hint": Can you show that (here $w\in C$) if $$ d(x,w)=\min_{c\in C}d(x,c), $$ then $w-x$ is a leader of the coset $-x+C$? And conversely :-)
Singular and Sheaf Cohomology
The cohomological dimension of a real $n$-manifold $M$ is $n$: this means that $H^i(M,\mathscr F)=0$ for each sheaf $\mathscr F$ of abelian groups on $M$ if $i>n$, and that there exist sheaves $\mathscr F$ on $M$ with $H^n(M,\mathscr F)\neq0$. You'll find this proved in Bredon's book on Sheaf Theory, §II.16. It follows that the cohomological dimension of a complex $n$-manifold is $2n$. For example, you reach the maximum, at least for compact ones, for the constant sheaf $\mathbb R$. The answer to your «Does this mean that even the $k$'th singular cohomology of $X$ vanishes for $k > n$?» question is No. (You can answer it without determining the cohomological dimension: just consider a compact complex $n$-manifold, which is automatically oriented: what is its $2n$-th cohomology group?) The question in your [Edited] paragraph also has a negative answer. Consider examples to see that it is so.
Rectangular to polar form using exact values.
You can solve this equation exactly. Note that you can write $$ \tan t =\frac{\sin t}{\cos t} = \frac{\frac{1}{2i}(e^{it}-e^{-it})}{\frac12(e^{it}+e^{-it})}=\frac 1i\frac{e^{2it}-1}{e^{2it}+1} $$ Now suppose we are given $w$ (in your case, $w=2-\sqrt 3$); to solve $$w=\tan t$$ we use the expression above to write $$w = \frac 1i\frac{e^{2it}-1}{e^{2it}+1}$$ $$iw(e^{2it}+1) = e^{2it}-1$$ $$e^{2it}(1-iw) =1+iw$$ $$e^{2it}=\frac{1+iw}{1-iw}$$ Using your particular value $w=2-\sqrt 3$, the expression $\frac{1+iw}{1-iw}$ simplifies after some tedious basic arithmetic to $\frac{\sqrt 3}{2}+\frac12 i =e^{\pi i/6}$. Our equation, therefore, is $$e^{2it} =e^{\pi i/6}$$ which means $$2it = \frac{\pi}{6}i +2\pi ki$$ $$\boxed{t =\dfrac{\pi}{12} + \pi k}$$ for integral $k$, as desired.
Minimize cost based on constraints given
Well, since you don't know how many printers of each type are hired, it only makes sense to give them names, i.e. use variables for these unknown quantities. For example, let's say $x$ printers of type A have been hired. How many documents will they print in one day? Since each printer of type A prints $40$ documents in a day, $x$ printers of type A will print $40x$ documents in a day. And what is the associated cost? For one printer of type A we have the fixed costs of Rs. $100$ plus $40$ pages at Rs. $4$ each for a total of Rs. $100+40\cdot4=260$; therefore $x$ of them will cost us Rs. $260x$ per day. Do the same for printers of types B and C. In fact, for type B you don't even need a variable, since you're given that there are $10$ of them. Then set up two expressions: one representing the total cost and one representing the total number of documents. Now your goal is to minimize the total cost function under the constraint that the total number of documents is $1500$.
Grassmannians Question, topology
Put a Hermitian (compatible) metric on $\mathbb C^{n+r}$; the standard Euclidean one would be fine. Then send any complex $n$-plane to its orthogonal complement, which is a complex $r$-plane.
When $a$ is even, the difference between $(a/2) \mod N$ and $(a \mod N)/2$?
If I read the question correctly, it could be restated as "Prove that for even $N$ the equality $a/2 \bmod N \equiv (a \bmod N)/2$ holds." Assuming this was what you meant, I think you stumbled upon something very important that everyone has to realise at some point, or be told at some point, namely, what invertibility actually means. When you write down $\frac{1}{2}$, just as a real number, what do you actually mean? You mean another real number, say $\alpha$, such that $\alpha\cdot 2=1$. This sounds like it could be a trivial observation, and maybe it is. However, when we define inverses the same way working modulo some integer $N$, this is a nice and correct definition, yet still it is where trouble may arise. To illustrate this we look precisely at even $N=2k$. Which elements do we get when we multiply the number $2$ with every element of $\mathbb{Z}_N$? The elements you get will be precisely $\{0,2,4,\ldots,2(k-1)\}$, and the important thing to notice is that $1$ is not an element of this set. This shows that it is impossible to invert the element $2$ when working modulo an even number. Can you now think why $2$ is always invertible working modulo an odd number? In fact, we even have that $m$ is invertible modulo $N$ if and only if $\gcd(m,N)=1$. If this was not what you tried to ask, I'm afraid I simply didn't understand your question correctly. There's also a remark I'd like to make about the previous comment by Gammatester. I strongly disagree with his comment on notation, since it confuses working in $\mathbb{Z}$ and working in $\mathbb{Z}_N$. On the left-hand side, he takes an element in $\mathbb{Z}_N$ and then pretends all of a sudden it's an element of $\mathbb{Z}$ (which it most certainly isn't!), before dividing it. On the right-hand side he does the converse thing. Once working in either $\mathbb{Z}$ or $\mathbb{Z}_N$, one should remain there or be very explicit otherwise. Unless one wants to utterly confuse the reader, of course.
Define function as a composition
I have a function $f$ which takes another function $h(x_i)$ and weight vector $\vec{w}$ with $i$ components: $$f := \sum_i h(x_i)w_i$$ If I'm understanding you right, I would write this definition by listing out the names of the parameters (which are $h$ and $w$), like this: $$f(h, w) := \sum_i h(x_i)w_i$$ I have a question, though: what does $x_i$ mean here? You didn't say that $x_i$ is one of the parameters to the function $f$. Is $x_i$ defined somewhere else? Edit: I have a few remarks after reading your question update and your comment: The definition I wrote above, $f := \sum_i h(x_i)w_i$, raises a question in my mind: where does the value of $x_i$ come from? In other words, how does $f$ know what the value of $x_i$ is? If the answer is "the value of $x_i$ is defined somewhere else", that's fine, but I need to know that in order to know if this is a valid definition. If the answer is "$f$ takes $x$ as a parameter", then you need to edit the parameter list of $f$ to $f(h, x, w)$ or something, so that we know that it's a parameter. Since the parameter list of $f$ contains $h$, this means that there are multiple different functions $h$ that $f$ can use in order to calculate $\sum_i h(x_i)w_i$, and $f$ will use whatever version of $h$ it is given. However, you also gave a definition for $h$, which seems to imply that there is only one function $h$ that $f$ will ever use to calculate $\sum_i h(x_i)w_i$. If there are multiple different functions $h$ that $f$ can use, then there are two things you've named $h$ (the function you've defined, and the function which is a parameter to $f$), and you should change the name of one or the other. If there is only one function $h$ that $f$ can use, then you should remove $h$ from the parameter list of $f$. In the definition of $f$, you are applying $h$ to just one parameter, but your definition of $h$ says that $h$ takes two parameters. Which one is it? You asked about writing the definition using the equals sign $=$ instead of the defined-as-equal symbol $:=$. That's totally fine. (I probably would have used $=$ instead of $:=$ myself.) But if you do use the equals sign, make sure it's clear that you're writing a definition and not a mere statement of equality. Now, I suspect I know what you're trying to communicate, but I'm not sure. If my hunches are right, then here are two ways of writing it. Here is the "explicit argument" style: For all such-and-such kind of non-linear transformations $\sigma$, weight vectors $\hat w$, and input vectors $x$, define $$h_{\sigma, \hat w}(x) = \sigma \left (\sum_i x_i \hat w_i \right).$$ Then, for all functions $a$ with the same domain and codomain as $h_{\sigma, \hat w}$ defined above, and all weight vectors $w$ and input vectors $x$, define $$f(a, x, w) = \sum_i a(x_i)w_i.$$ Now that we are done with definitions, suppose we have such-and-such type of things $a$, $b$, and $c$, whose values are such-and-such. Then we can say such-and-such about $f(a, b, c)$... And here is the "implicit argument" style: Suppose we have such-and-such kind of non-linear transformation $\sigma$ and weight vector $\hat w$. Then, for all input vectors $x$, define $$h(x) = \sigma \left (\sum_i x_i \hat w_i \right).$$ Furthermore, suppose that we also have another weight vector $w$. Then define $$f(x) = \sum_i h(x_i)w_i.$$ Now that we are done with definitions, suppose that $\sigma$, $\hat w$, and $w$ have such-and-such values. Suppose also that we have an input vector $b$ whose value is such-and-such. 
Then we can say such-and-such about $f(b)$... The only substantial difference between the above two excerpts is that in the first one, $f$ can use any possible function for $a$, whereas in the second one, $f$ is defined as always using the given function $h$.
Is $\exists a \in A. \forall b \in B \Rightarrow P(a, b)$ equivalent to $\forall c \in B. \exists d \in A \Rightarrow P(c, d)$?
No. It is not true that there exists a woman on Earth (a in A) such that for every man on Earth (b in B), the woman is his mother [P(a,b)]. However, it is true that for every man on Earth (c in B) there exists a woman on Earth (d in A) such that the woman is his mother [P(c,d)].
Tricky (extremal?) combinatorics problem
The keyword here is block design (and resolvable block designs). Update: The maximum number of rounds is between $9$ and $11$. Person 1 sits with $3$ distinct people in each round, so we can have at most $\lfloor (36-1)/3 \rfloor=11$ rounds. (We could similarly argue that there are $\binom{36}{2}$ pairs and pairs are used $9\binom{4}{2}$ at a time, giving $\leq 11$ possible rounds.) $9$ rounds is possible, and can be constructed as follows: Step 1: Take three mutually orthogonal Latin squares of order $9$ (which exist; in fact, a finite field construction gives $8$-$\mathrm{MOLS}(9)$ (ref.)). Call these $L_1,L_2,L_3$ and assume $L_1$ uses the symbol $\{1,\ldots,9\}$, $L_2$ uses the symbol $\{10,\ldots,18\}$ and $L_3$ uses the symbol $\{19,\ldots,27\}$. Step 2: Take the "union" of these matrices to form a $9 \times 9$ matrix in which each cell $(i,j)$ contains $\{L_1[i,j],L_2[i,j],L_3[i,j]\}$. Step 3: Add symbol $28$ to the sets in the first column, $29$ to the sets in the second column, and so on, up to $36$ in the last column. We can check there are no pairs that occur twice case-by-case: if $x$ and $y$ (with $x \neq y$) occurs in two sets, then either (a) $(x,y)$ occurs in some $(L_i,L_j)$ twice, contradicting the orthogonal property, or (b) both $y$s occur in the same column, either contradicting that the $L_i$s are Latin squares, or contradicting $x \neq y$. As a specific example: [[28,1,10,19],[29,4,13,22],[30,7,16,25],[31,2,11,20],[32,5,14,23],[33,6,15,24],[34,3,12,21],[35,9,18,27],[36,8,17,26]] [[28,4,16,23],[29,7,10,24],[30,1,13,20],[31,5,15,27],[32,6,11,26],[33,2,14,21],[34,9,17,22],[35,8,12,25],[36,3,18,19]] [[28,7,13,26],[29,1,16,21],[30,4,10,27],[31,6,14,25],[32,2,15,19],[33,5,11,22],[34,8,18,24],[35,3,17,20],[36,9,12,23]] [[28,2,12,22],[29,5,18,25],[30,6,17,19],[31,3,10,23],[32,9,13,24],[33,8,16,20],[34,1,11,27],[35,4,14,26],[36,7,15,21]] [[28,5,17,24],[29,6,12,20],[30,2,18,23],[31,9,16,26],[32,8,10,21],[33,3,13,27],[34,4,15,25],[35,7,11,19],[36,1,14,22]] [[28,6,18,21],[29,2,17,27],[30,5,12,26],[31,8,13,19],[32,3,16,22],[33,9,10,25],[34,7,14,20],[35,1,15,23],[36,4,11,24]] [[28,3,11,25],[29,9,14,19],[30,8,15,22],[31,1,12,24],[32,4,18,20],[33,7,17,23],[34,2,10,26],[35,5,13,21],[36,6,16,27]] [[28,9,15,20],[29,8,11,23],[30,3,14,24],[31,4,17,21],[32,7,12,27],[33,1,18,26],[34,5,16,19],[35,6,10,22],[36,2,13,25]] [[28,8,14,27],[29,3,15,26],[30,9,11,21],[31,7,18,22],[32,1,17,25],[33,4,12,19],[34,6,13,23],[35,2,16,24],[36,5,10,20]] It's not clear to me if more rounds than this would be possible. 
I wrote some code that found a bajillion random $8$-round examples, e.g.: [[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16],[17,18,19,20],[21,22,23,24],[25,26,27,28],[29,30,31,32],[33,34,35,36]] [[1,7,20,32],[2,5,17,29],[3,8,9,15],[4,18,26,34],[6,12,16,28],[10,22,30,35],[11,24,31,33],[13,19,21,27],[14,23,25,36]] [[1,8,23,29],[2,10,14,33],[3,6,21,26],[4,12,13,22],[5,20,28,31],[7,11,27,35],[9,16,18,25],[15,19,32,34],[17,24,30,36]] [[1,11,15,28],[2,7,18,30],[3,16,22,29],[4,9,27,33],[5,19,26,36],[6,10,20,25],[8,14,24,34],[12,17,21,31],[13,23,32,35]] [[1,12,24,27],[2,8,19,31],[3,11,32,36],[4,16,20,35],[5,9,14,22],[6,15,30,33],[7,17,23,26],[10,13,18,28],[21,25,29,34]] [[1,13,26,31],[2,16,23,34],[3,5,25,35],[4,10,15,36],[6,18,22,27],[7,12,19,33],[8,17,28,32],[9,20,24,29],[11,14,21,30]] [[1,16,21,36],[2,11,22,26],[3,10,31,34],[4,19,23,28],[5,18,32,33],[6,9,13,17],[7,15,24,25],[8,20,27,30],[12,14,29,35]] [[1,17,22,25],[2,15,21,35],[3,13,20,33],[4,7,14,31],[5,10,23,27],[6,11,19,29],[8,12,18,36],[9,28,30,34],[16,24,26,32]] None of these $8$-round examples were extendable. The random examples made it look like finding a $9$-round example this way would be difficult -- there were very few possible table combinations remaining, let alone finding $9$ simultaneous table combinations including every person. In the above example, for instance, no valid table can be formed with person $6$.
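For anyone who wants to replicate the experiment, here is a small validity checker (a sketch of my own, not the code used above): it confirms that a proposed list of rounds seats all $36$ people each round and never repeats a pair.

    import java.util.HashSet;
    import java.util.Set;

    public class RoundsChecker {
        static boolean valid(int[][][] rounds, int people) {
            Set<Long> usedPairs = new HashSet<>();
            for (int[][] round : rounds) {
                Set<Integer> seated = new HashSet<>();
                for (int[] table : round) {
                    for (int p : table) seated.add(p);
                    for (int i = 0; i < table.length; i++)
                        for (int j = i + 1; j < table.length; j++) {
                            long a = Math.min(table[i], table[j]);
                            long b = Math.max(table[i], table[j]);
                            if (!usedPairs.add(a * people + b))
                                return false; // this pair already sat together
                        }
                }
                if (seated.size() != people)
                    return false; // someone is missing or seated twice
            }
            return true;
        }
    }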
Does $\{nz^n\}_{n\in\mathbb N}$ converge uniformly in the open unit disc of $\mathbb{C}$?
The sequence of functions $$f_n(z)=nz^n, \,\,\,n\in\mathbb N, $$ DOES NOT converge uniformly to $0$ in the open unit disc $D$, as $$ f_n(z_n)=1, \quad\text{for}\,\,\, z_n=n^{-1/n}\in D, $$ even worse $$ f_n(z_n)=\frac{n}{2}, \quad\text{for}\,\,\, z_n=2^{-1/n}\in D. $$ In fact $$ \sup_{z\in D}\lvert\, f_n(z)\rvert=n, $$ and therefore $f_n$ DOES NOT converge uniformly in the open unit disc. Note that if $f_n:X\to\mathbb C$ converges uniformly to $f$, then $f_n(x_n)\to f(x)$ whenever $x_n\to x$. On the other hand, $f_n(z)=nz^n$ DOES converge uniformly to $0$ in any open disc $D_r$, $r<1$, since $$ \sup_{z\in D_r}\lvert f_n(z)\rvert=nr^n\to 0, $$ as $n\to \infty$. This can be proved using, for example, the ratio test for sequences.
Growth of modified binomial recurrence
If you use the same initial conditions as the binomial coefficients, you get the formula $$ f(n,r) = \binom{\lfloor \frac{n+r}{2} \rfloor}{r}, $$ from which you can extract asymptotics. Fun fact: $\sum_{r=0}^n f(n,r) = F_{n+2}$, where $F_m$ is the $m$'th Fibonacci number.
Yes or No, Is the Concept of a Tangent (Tangent Line) Generally Clearly Understood?
COMMENT ► From the viewpoint of a tangent as a line touching a given curve at one point, let me give two examples of curves: $$(1)\quad y= |x-3|+2 \qquad\qquad (2)\quad y^2=(x-3)^3$$ The curve $(1)$ has infinitely many tangents at the point $(3,2)$, and the same holds for the curve $(2)$ at the point $(3,0)$. ► From the viewpoint of a tangent as the limiting position of secants, the curve $(1)$ has only two (distinct) tangents, while the curve $(2)$ has just one tangent (nevertheless the curve is singular and is not an elliptic curve). ► It is for many reasons that the definition of the tangent at a point of a curve is given for curves differentiable at the considered point.
Calculating the momentum - notation issue
We know that velocity $v$ is an integral of acceleration: $v(t)=\int a(t)\,dt$. More precisely, $$ v(t) - v(t_0) = \int\limits_{t_0}^t a(\tau) \, d\tau $$ Momentum is $p=mv$. I'd like to derive the formula for $\Delta p$ (the change of momentum in time). Written loosely with indefinite integrals, $\Delta p=m(v_2 - v_1)=m(v(t_2) - v(t_1))$; made precise with the formula above, $$ \begin{align} \Delta p &= m(v_2 - v_1) \\ &= m(v(t_2) - v(t_1)) \\ &= m \left( v(t_0) + \int\limits_{t_0}^{t_2} a(\tau) \, d\tau - v(t_0) - \int\limits_{t_0}^{t_1} a(\tau) \, d\tau \right) \\ &= m \int\limits_{t_1}^{t_2} a(\tau) \, d\tau \\ &= \int\limits_{t_1}^{t_2} F(\tau) \, d\tau \end{align} $$
Conditional expectation for random walks
Hint: The distributions of $(X_1,S_n)$ and $(X_k,S_n)$ coincide hence $E(X_1\mid S_n)=E(X_k\mid S_n)$ for every $1\leqslant k\leqslant n$ (this uses the fact that conditional expectations only depend on joint distributions). Summing these over $k$, one gets $$n\,E(X_1\mid S_n)=\sum_{k=1}^nE(X_k\mid S_n)=E\left(\left.\sum_{k=1}^nX_k\right| S_n\right)=\cdots$$
Comparing large values
$8^9 > 9^8$; in fact it is more than $3$ times greater. Taking logarithms, this gives $8^9 \ln 8 > 3 \cdot 9^8 \ln 8 > 9^8 \ln 9$ (since $3\ln 8 > \ln 9$), which shows that $8^{8^9} \gg 9^{9^8}$ ($\gg$ means significantly greater). So then at the next level, $9^{8^{8^9}}$ is a larger base raised to a larger exponent than $8^{9^{9^8}}$.
Confusion related to calculation of derivative
Your notation suggests that $s_1$ and $s_2$ are functions of $\theta$ (alone) - is that correct? Let's just consider $$P = \left(\begin{array}{cc}s_1 & 0 \\ 0 & s_2\end{array}\right)$$ to begin. (assuming $A$ is a constant matrix, it has no effect on this calculation). Then $$ \frac{\partial P}{\partial s_1} = \left(\begin{array}{cc}1 & 0 \\ 0 & 0\end{array}\right); \quad \frac{\partial P}{\partial s_2} = \left(\begin{array}{cc}0 & 0 \\ 0 & 1\end{array}\right) $$ so $$ \frac{\partial P}{\partial s_1}\frac{\partial s_1}{\partial \theta} + \frac{\partial P}{\partial s_2}\frac{\partial s_2}{\partial \theta} = \left(\begin{array}{cc}1 & 0 \\ 0 & 0\end{array}\right)\frac{\partial s_1}{\partial \theta} + \left(\begin{array}{cc}0 & 0 \\ 0 & 1\end{array}\right)\frac{\partial s_2}{\partial \theta}\\ = \left(\begin{array}{cc}\frac{\partial s_1}{\partial \theta} & 0 \\ 0 & \frac{\partial s_2}{\partial \theta}\end{array}\right) $$ so it's kind of a trivial application of the chain rule. It would be less trivial if $P$ depended on $s_1$ and $s_2$ in a more complicated way, e.g. $$ P = \left(\begin{array}{cc}s_1^2 & s_1 - s_2 \\ s_1s_2 & s_2\end{array}\right) $$
How to prove: $\left(\frac{2}{\sqrt{4-3\sqrt[4]{5}+2\sqrt[4]{25}-\sqrt[4]{125}}}-1\right)^{4}=5$?
Hint: Set $x=5^{1/4}$, then $$ \frac{2}{\sqrt{4-3x+2x^2-x^3}}-1=x $$ This equation simplifies to $$ x(x^4-5)=0 $$ The rest is clear.
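For completeness (the algebra behind "simplifies to", added here): squaring $\frac{2}{\sqrt{4-3x+2x^2-x^3}}=1+x$ gives $4=(4-3x+2x^2-x^3)(1+x)^2$, and expanding the right-hand side, $$(4-3x+2x^2-x^3)(1+x)^2 = 4+5x-x^5,$$ so $x^5-5x=0$, i.e. $x(x^4-5)=0$. Since $x=5^{1/4}\neq0$, it follows that $x^4=5$, which is the identity to be proved.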
Mean Independence of self-contained random variable
Let us first show that $X_t=E(X_t|\mathcal F_t)$ instead of $EX_t=E(X_t|\mathcal F_t)$. Note that $Y_t$ is $\mathcal F_t$-measurable. Hence $X_t=Y_t+E(X_t|\mathcal F_t)$ is also $\mathcal F_t$-measurable. This gives $X_t=E(X_t|\mathcal F_t)$. Now we get $Y_t=0$, so $\mathcal F_t$ is a trivial sigma field (every set in it has probability $0$ or $1$). This implies that any random variable, in particular $X_t$, is independent of $\mathcal F_t$, so we end up with $X_t=E(X_t|\mathcal F_t)=EX_t$.
A strange ring category
This is useful if you would like a Hopf algebra to be a group object in the category of algebras. If $A$ is a Hopf algebra, then the antipode map $S: A \to A$ is an antiendomorphism of $A$. You can see this from the example of group algebras: if $G$ is a group and $k[G]$ is the group algebra, then $k[G]$ has a Hopf algebra structure where $S: g \mapsto g^{-1}$, and we have $(gh)^{-1} = h^{-1} g^{-1}$. I don't know that it's worth defining a category just to fix this issue. I have seen people spend far more time bothered by this issue than it is worth when learning Hopf algebras, so maybe this would have helped them.
A **proof** for $\sum_{i=0}^{t-2}{\frac{1}{t+3i}} \leq \frac{1}{2}$
Might be a bit overkill, but well... Check it is true for $t < 27$ or so. Then, assuming $t \geq 27\ (*)$: $$ \sum_{i=0}^{t-2} \frac{1}{t+3i} = \frac{1}{t} + \sum_{i=1}^{t-2} \frac{1}{t+3i} \leq \frac{1}{t} + \int_0^{t-2} \frac{1}{t+3x} \, dx = \frac{1}{t} + \frac{1}{3} \log\left(4 - \frac{6}{t}\right) \leq \frac{1}{t} + \frac{1}{3}\log 4 \overset{(*)}{\leq} \frac{1}{2} $$
How to find all possible values of $a_{2015}$ for $a_2 = 5$, $a_{2014} = 2015$, and $a_n=a_{a_{n-1}}$
This is an exercise from the USA Mathematical Talent Search (Academic Year 2014–2015), and the solutions can be found here
Is this log identity true?
You have to be careful with branches of $\ln$. The definition of $z^w$ for complex $z$ and $w$ is $z^w = \exp(w \ln(z))$ (for some branch of $\ln$). Therefore $\ln(z^w) = w \ln(z) + 2 \pi i n$ where $n$ is an integer. Which integer it is will depend on what branch of $\ln$ you want to use.
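A concrete illustration (my example): take $z=-1$, $w=2$ and the principal branch. Then $\ln(z^w)=\ln 1=0$, while $w\ln(z)=2\cdot i\pi=2\pi i$, so $\ln(z^w)=w\ln(z)+2\pi i n$ holds with $n=-1$, not $n=0$.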
Does the closure of the component set restricted to the subspace equal the closure of the component set in the subspace?
The assertion is false. Take for example $X = \mathbb R, Y = [0, 2]$ and $A = [0, 1]$. Then $0 \in Y \cap Cl_X(X - A)$ but $0 \not \in Cl_Y(Y - A)$.
If $f$ is continuous on $[a,b]$, then $f$ is bounded on $[a,b]$. Have questions about the proof
I am pretty sure this proof is valid. However, you seem to be confused about the part after we prove $b$ is the least upper bound of the set and why the proof does not end there, so I will pick up the argument from there. We have that the least upper bound of the set $A=\{ x : x \in [a, b] \wedge f \ \text{is bounded on} \ [a, x]\}$ is $b$. This does not, however, automatically mean $b \in A$. The least upper bound of a set does not necessarily lie inside the set. Take the following example: $$S=\{3, 3.1, 3.14, 3.141, 3.1415, 3.14159, ...\}$$ The above set slowly approaches $\pi$'s decimal expansion with rational numbers. Therefore, $\pi$ is the least upper bound of the set. However, $\pi \notin S$ because this is a set of rational numbers and $\pi$ is not rational. This might take a while to understand, so you might want to pause and think here if you are confused. Therefore, since $b$ being the least upper bound does not imply $b \in A$, we need to prove $b \in A$ using the fact that $f$ is continuous. The proof does this well, but it's sort of ambiguous, so I'll rewrite it here to clarify: Since $b$ is the least upper bound of the set $A$, for any $\epsilon > 0$ we can find a $\beta \in A$ such that $b-\epsilon < \beta \leq b$. Also, since $f$ is continuous, we know that $f$ is bounded on $[b-\epsilon, b]$ for some $\epsilon > 0$. Using this $\epsilon$, choose $\beta$ as described in the first sentence of this paragraph. We know that $f$ is bounded on $[a, \beta]$ by the definition of $A$, and that $f$ is bounded on $[\beta, b]$ because $[\beta, b] \subset [b-\epsilon, b]$ and $f$ is bounded on the latter, hence on every subinterval of it. Thus, we can put $[a, \beta]$ and $[\beta, b]$ together to find that $f$ is bounded on $[a, b]$.
Find a specific rectangle in an ellipse
Let $\dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=1$ be the equation of the ellipse. The outer rectangle has length and height $2a$ and $2b$ respectively. The inner rectangle's upper right corner $A(x_A,y_A)$ is on the ellipse, hence $\dfrac{x_A^2}{a^2}+\dfrac{y_A^2}{b^2}=1$. The two rectangles are of similar proportions, hence $\dfrac{x_A}{y_A}=\dfrac{a}{b}$. By combining the two equations you get: $x_A^2=\dfrac{a^2}{2}$ and $y_A^2=\dfrac{b^2}{2}$. EDIT: another demonstration relies on the square. Between the square inscribed in a circle and the one circumscribed about the same circle, there is a ratio of $\sqrt{2}$ between the sides (easy to see when considering that the diagonal of one is the side of the other). Rectangles are then obtained from squares by a scaling of the axes, and the result carries over.
Can true surjection really exist for algebraic functions?
Check the definition of an Algebraic function from Wikipedia. An algebraic function is just a function which can be written as a root of a polynomial equation. It doesn't mean that an algebraic function only takes on values which are algebraic numbers. So, for example, $f(x)=x$ is a perfectly fine algebraic function which is also surjective.
Finding a unit vector perpendicular to another vector
Let $\vec{v}=x\vec{i}+y\vec{j}+z\vec{k}$ be a perpendicular vector to yours. Their inner product (the dot product $\vec{u}\cdot\vec{v}$) should be equal to $0$, therefore: $$8x+4y-6z=0 \tag{1}$$ Choose for example $x, y$ and find $z$ from equation $(1)$. In order to make its length equal to $1$, calculate $\|\vec{v}\|=\sqrt{x^2+y^2+z^2}$ and divide $\vec{v}$ by it. Your unit vector would be: $$\vec{u}=\frac{\vec{v}}{\|\vec{v}\|}$$
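For instance, taking $x = y = 1$ in $(1)$ gives $8 + 4 - 6z = 0$, so $z = 2$ and $\vec{v} = \vec{i}+\vec{j}+2\vec{k}$. Then $\|\vec{v}\| = \sqrt{1+1+4} = \sqrt{6}$, and $$\frac{1}{\sqrt{6}}\left(\vec{i}+\vec{j}+2\vec{k}\right)$$ is a unit vector perpendicular to the original one.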
Sub-lattices and lattices.
The point of confusion is that a lattice can be described in two different ways. One way is to say that it is a poset such that finite meets and joins exist. Another way is to say that it is a set upon which two binary operations (called meet and join) are given that satisfy a short list of axioms. The two definitions are equivalent in the sense that using the first definition's finite meets and joins gives us the two binary operations, and the structure imposed by the second definition allows one to recover a poset structure, and these processes are inverse to each other. So now, if $L$ is a lattice and $S\subseteq L$ then $S$ is automatically a poset, indeed a subposet of $L$. But, even if with that poset structure it is a lattice it does not mean that it is a sublattice of $L$. To be a sublattice it must be that for all $x,y\in S$, the join $x\vee y$ computed in $S$ is the same as that computed in $L$, and similarly for the meet $x\wedge y$. This much stronger condition does not have to hold. Indeed, as noted by Gerry in the comment, the meet $\{1,2\}\wedge \{2,3\}$ computed in $\mathcal P(\{1,2,3\})$ is $\{2\}$, while computed in the given subset it is $\emptyset$. None the less, it can immediately be verified that the given subset is a lattice since under the inclusion poset, all finite meets and joins exist.
Integrating exponentials and complex numbers: Why does this equality hold?
In general, for any constant $a$, $$\frac{d}{dx}e^{ax}=ae^{ax}.$$ Now, in $\frac{d}{dx}e^{\frac ihxp}$, $p$ is merely a constant (since we're differentiating with respect to $x$). Can you proceed from here?
Interesting or creative applications of Bezout’s (Bachet’s) Identity
Here’s one: Dickson proved that all primitive integral solutions of the equation $$x^2-my^2=zw$$ are given by \begin{align} z &= el^2 + 2flq + gq^2, \\ w &= en^2 - 2fnr + gr^2, \\ x &= \pm (eln+fnq-flr-gqr), \\ y &= lr + nq, \end{align} where $f^2-eg=m$. Dickson says “Since $y=l$ and $w=g$ are relatively prime, there exist integers $f$ and $q$ such that $x=fl+gq$”, and then goes on to prove the theorem. [@BillDubuque: If you stumble back on this post, I’d love for you to discuss it with respect to your comments under the main post. Thanks!]
Does the maximum of a sum exist if each function has a maximum?
Let $D$ be a bounded simply connected open set, and $\vec{v}, \vec{w}$ be any two points in $D$. Then define $h$ as $$h:D\rightarrow \mathbb{R}:\vec{x}\mapsto \|\vec{x}-\vec{v}\|\cdot\|\vec{x}-\vec{w}\|$$ This function reaches a maximum on $\overline{D}$. If the set $D$ is sufficiently large, this maximum will be achieved on the boundary, rather than in the local extremum $\frac{\vec{v}+\vec{w}}{2}$. Let this maximal value be $M$, then define $f$ and $g$ in the following way. $$f(\vec{x}) = \begin{cases} h(\vec{x}) &\text{if } \vec{x}\neq\vec{v}\\ M &\text{if } \vec{x} = \vec{v} \end{cases}$$ And similarly for $g$ and $\vec{w}$. Then the functions $f$ and $g$ will achieve their maximal values in $\vec{v}$ and $\vec{w}$ respectively, but the sum $f+g$ has a limit point in $\overline{D}$ where the limit is equal to $2M$, which is not achieved in any of the interior points of $D$. This answers the first question negatively. To figure out a criterion under which the maximum is achieved, first the open set needs to be simply connected. Otherwise any $f$ and $g$ that satisfy the condition on $D$ where $f+g$ reaches its maximum in $\vec{x}$, will fail to achieve a maximum on $D\setminus \{\vec{x}\}$. Any kind of differentiability criterion also doesn't work. The example is discontinuous, but by replacing the discontinuity with a sufficiently small peak of the form $\frac{M}{c\|\vec{x}-\vec{v}\|^2+1}$ for some large positive value of $c$ you can make even an analytic counterexample. $$ \begin{align} f &= h + \frac{M}{c\|\vec{x}-\vec{v}\|^2+1}\\ g &= h + \frac{M}{c\|\vec{x}-\vec{w}\|^2+1} \end{align} $$ You can require both $f$ and $g$ to be concave and the set $D$ to be convex, which implies $f+g$ is concave as well, and the function $f+g$ reaches its maximum on an interior point of $D$. This seems to be the simplest criterion, albeit a very restrictive one.
Second principal curvature of a surface of revolution at z=0
I figured this out a while ago and thought I'd share my solution in case someone stumbles across this in future. For the case in which we have a blunt leading edge, let \begin{equation} \kappa_1 = \frac{1}{R_1} \qquad \mathrm{and} \qquad \kappa_2 = \frac{1}{R_2}. \end{equation} $\therefore$ \begin{equation} z = \frac{\cos\theta}{\kappa_2} \qquad[1] \end{equation} Near the leading edge, \begin{equation}\theta \rightarrow \frac{\pi}{2}, \quad \mathrm{so} \quad \frac{\pi}{2} - \theta \rightarrow \epsilon \,\mathrm{(small)} \end{equation} $\therefore$ \begin{equation} \cos\theta \rightarrow \sin \epsilon \rightarrow \epsilon \qquad [2] \end{equation} Express $\epsilon$ as an expansion in $s$: \begin{equation} \epsilon=\frac{\partial \epsilon}{\partial s}s + \frac{1}{2!}\frac{\partial^2 \epsilon}{\partial s^2}s^2 + \frac{1}{3!}\frac{\partial^3 \epsilon}{\partial s^3}s^3 + ... \end{equation} As $s \rightarrow 0$ we can write, \begin{equation} \epsilon \approx \frac{\partial \epsilon}{\partial s}s \qquad [3] \qquad \mathrm{and} \qquad z \rightarrow s \qquad [4] \end{equation} And \begin{equation} \frac{\partial \epsilon}{\partial s} = -\frac{\partial \theta}{\partial s} = \kappa_1 \qquad [5] \end{equation} Finally, substituting [2], [3], [4] and [5] into [1], \begin{equation} s \approx \frac{\kappa_1 s}{\kappa_2} \end{equation} $\therefore$ \begin{equation} \kappa_1 \approx \kappa_2 \end{equation} \begin{equation} Q.E.D. \end{equation}
An alternative proof of an application of Hahn-Banach
The problem with your proof is that linear functionals defined by specifying their value on a Hamel basis have no reason to be bounded in general. For example, in this case, if you extend $\{x_0\}$ to a Hamel basis $\{x_0\} \cup \{x_i: i \in \Lambda\}$ then there could e.g. be linear combinations of the type $x_0 + \sum_{i \in I} \lambda_i x_i$ with very small norm (where $I \subseteq \Lambda$ is finite). This is a problem since $f(x_0 + \sum_{i \in I} \lambda_i x_i) = f(x_0) = 1$ and so $\|f\| \geq \|x_0 + \sum_{i \in I} \lambda_i x_i\|^{-1}$ which gets very large as $\|x_0 + \sum_{i \in I} \lambda_i x_i\|$ gets small.
Accumulation points of a countable set in [0,1]
HINT: Take $A=\Bbb Q\cap[0,1]$.
comparison of the limits of the expectations of two sequences of r.v.
The conclusion does not hold under the mentioned assumption because, first, there is no guarantee that $Y_n$ is integrable. For example, if $Z$ is a non-integrable non-negative random variable, let $Y_n:=X_n+Z\cdot \mathbf 1\left\{Z\gt n\right\}$. For each $\omega$, if $n\geqslant Z\left(\omega\right)$, we have $X_n\left(\omega\right)=Y_n\left(\omega\right)$ but $\mathbb E\left[Y_n\right]$ is infinite for all $n$. Second, assuming $Y_n$ integrable for all $n$ is not enough. Indeed, let $Y_n:=X_n+n^2\cdot \mathbf 1\left\{Z\gt n\right\}$ where $Z$ is such that $n\mathbb P\left\{Z\gt n\right\}\gt 1$ for all $n$. Then for each $\omega$, if $n\geqslant Z\left(\omega\right)$, we have $X_n\left(\omega\right)=Y_n\left(\omega\right)$ but $\mathbb E\left[Y_n\right]=\mathbb E\left[X_n\right]+n^2\mathbb P\left\{Z\gt n\right\}$ which goes to infinity as $n$ goes to infinity. However, if we assume that $\mathbb E\left[Y_n\mathbf 1\left\{Y_n\gt X_n\right\}\right]\to 0$, then the wanted conclusion holds as a consequence of the decomposition $$ \mathbb E\left[Y_n\right]=\mathbb E\left[Y_n\mathbf 1\left\{Y_n\leqslant X_n\right\}\right]+\mathbb E\left[Y_n\mathbf 1\left\{Y_n\gt X_n\right\}\right] $$ because the first term of the right hand side does not exceed $\mathbb E\left[X_n\right]$. For example, if the sequence $\left(Y_n\mathbf 1\left\{Y_n\gt X_n\right\}\right)_{n\geqslant 1}$ is uniformly integrable, then $\mathbb E\left[Y_n\mathbf 1\left\{Y_n\gt X_n\right\}\right]\to 0$ (there is a pointwise convergence to $0$).
Confusion in conformal mapping
If $z \in \mathbb C - \left((-\infty,-1] \cup [1,\infty)\right)$ you first map it to a point of $\mathbb C - [-1,1]$ using the inverse of $\frac 1 z$ (which is $\frac 1 z$ itself!) and then apply the given function on $\mathbb C - [-1,1]$. So what you get is $\frac {(\frac 1 z )+1} {(\frac 1 z )-1} = \frac{1+z}{1-z}$.
limits and summation
If $f:[a,b]\to \mathbb{R}$ is continuous, then $$\displaystyle \lim_{n\to\infty} \frac{b-a}{n}\sum_{i=1}^n f\left(a+\frac{b-a}{n}i\right) = \int_a ^b f(x)\,dx$$ We can take $f(x)=\tan x$, $a=0$, $b=\pi/3$. So this sum is equal to $\displaystyle \int_0^{\pi/3} \tan x \, dx.$
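For completeness, this integral evaluates in closed form: $$\int_0^{\pi/3} \tan x \, dx = \Big[-\ln \cos x\Big]_0^{\pi/3} = -\ln\frac{1}{2} = \ln 2.$$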
Why can we assume WLOG that $x$ is zero?
You’ve not really given us enough information to be able to answer the question with certainty. However, if the property of three-point systems that was to be proved is one that depends only on the relative positions of the points and not on their absolute positions in the Cartesian plane, then you can set the origin of your Cartesian coordinate system anywhere in the plane determined by the three points without affecting whether or not the points have the property. In particular, you can set the origin at one of the points, and doing so may make the calculations simpler. In some other context it might be more convenient to set the origin at the incentre, circumcentre, centroid, or orthocentre of the triangle determined by the three points.
How many ways to color a graph with 10 colors
Start by counting the number of ways to color the triangle. If you have $n$ colors available, you have $n$ choices for the $4$-way vertex, $n-1$ for the one above it, and $n-2$ choices for the third. Think about how many choices you have for each of the others. You will get a $9^{\text{th}}$ degree polynomial in $n$.
How should I be thinking about the total derivative at a point?
"I wonder how I should interpret the expression $[Dg(\chi)\cdot Df(\chi)](v)$." I think it's best to not try to interpret it at all. It's an unnatural expression without an intrinsic meaning. Multiplication of matrices is composition of linear operators. The linear operators $Df(\chi)$ and $Dg(\chi)$ are both defined in the same copy of $\mathbb R^n$ (the domain of $f$ and $g$) and go from there to the second copy of $\mathbb R^n$, in which $f$ and $g$ take values. Their composition does not make real sense (although it's formally defined as a product of matrices). On the other hand, if we are in the situation of the chain rule - the domain of $g$ contains the range of $f$ - then the product of $Dg(f(\chi))$ with $Df(\chi)$ makes sense: it's the composition of linear operators. We have three copies of $\mathbb R^n$ in this situation: $f$ goes from first to second, $g$ from second to third. It is important to keep in mind the distinction between different copies of $\mathbb R^n$. Doing calculus on manifolds enforces this distinction: if $f$ and $g$ are smooth maps from manifold $M$ to manifold $N$, there is no way to multiply $Df$ and $Dg$. On the third hand, if we are talking about maps between Riemannian manifolds (which include $\mathbb R^n$ with their standard metric), then one can take the scalar product of $Df(\chi)$ and $Dg(\chi)$. It is usually defined as $\langle A,B \rangle =\operatorname{tr}( B^TA)$ where $A$ and $B$ are matrices (such as $Df(\chi)$ and $Dg(\chi)$). This product generalizes $f'(x)g'(x)$ from one-dimensional analysis.
Complex line integrals in increasing directions
You have two options here. Without giving away too much, I'll describe both of them. One is to parametrize the contour with something like $x = t$, $y = 1-t$ for $t \in (-\infty,\infty)$. Then, $z = x+iy = t+i(1-t)$, $dz = (1-i)dt$, and you can evaluate the contour integral as a simple one variable integral. The other option is to let $\Gamma_R$ be a contour consisting of a straight line $L_R$ from $z = -R+i(1+R)$ to $z = R+i(1-R)$, and then a counterclockwise circular arc $C_R$ from $z = R+i(1-R)$ back to $z = -R+i(1+R)$. (To specify a precise curve, we should say that the center of this circular arc is $z = i$, but that doesn't matter too much.) By the residue theorem, $\displaystyle\int_{L_R}\dfrac{dz}{z^2+4} + \displaystyle\int_{C_R}\dfrac{dz}{z^2+4} = \displaystyle\oint_{\Gamma_R}\dfrac{dz}{z^2+4} = 2\pi i \text{Res}\left(\dfrac{1}{z^2+4};2i\right)$ as long as $R$ is large enough to enclose the pole at $z = 2i$. You can show that as $R \to \infty$, the integral $\displaystyle\int_{C_R}\dfrac{dz}{z^2+4}$ tends to $0$, while the integral $\displaystyle\int_{L_R}\dfrac{dz}{z^2+4}$ tends to the line integral $\displaystyle\int\dfrac{dz}{z^2+4}$ along the infinite line $x+y = 1$. Thus, the answer is simply $2\pi i \text{Res}\left(\dfrac{1}{z^2+4};2i\right)$, which you can easily compute. Let me know if you need more information about either method.
Fourier Transform of Sine
The short answer is that the $2 \pi$ comes from the inversion formula. Here is an informal perspective which gives a hint as to how this can be made more rigorous: A distribution is defined by its action on a space of nicely behaved test functions. For a function $f$ from a restricted class of ordinary functions, we can define a distribution $T_f$ by $T_f(\phi) = \int f \phi$. For the function $t \mapsto 1$, we get $T_1(\phi) = \int \phi$. The '$\delta$ function' is defined by the distribution $T_\delta(\phi) = \phi(0)$. That is, it takes a test function $\phi$ and returns its value at $0$. The Fourier transform of a distribution is defined by $\hat{T_f}(\phi) = T_f(\hat{\phi})$, where $\hat{\phi}$ is the ordinary Fourier transform of $\phi$. We see that ${\hat{T}_1}(\phi) = T_1(\hat{\phi}) = \int \hat{\phi}$. The standard inversion formula shows that $\int \hat{\phi} = 2 \pi \phi(0)$, which gives ${\hat{T}_1}(\phi) = 2 \pi T_\delta(\phi)$, or more succinctly, ${\hat{T}_1} = 2 \pi T_\delta$. The same sort of analysis shows that ${\hat{T}_{t \mapsto e^{iat}}}(\phi) = 2 \pi \phi(a) = 2 \pi T_\delta(\omega \mapsto \phi(\omega+a))$. The last expression may be written informally as $T_{ \omega \mapsto 2 \pi \delta(\omega-a) } (\phi)$, which is the desired result.
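To see where the $2\pi$ enters: with the convention $\hat{\phi}(\omega) = \int \phi(t)\,e^{-i\omega t}\,dt$ (the one consistent with the factors above), the inversion formula reads $\phi(t) = \frac{1}{2\pi}\int \hat{\phi}(\omega)\,e^{i\omega t}\,d\omega$, and evaluating it at $t = 0$ gives exactly $\int \hat{\phi} = 2\pi\,\phi(0)$.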
Is this sum less than or equal to $2$?
(To those who originally upvoted: I've changed my answer because this is a much sleeker version, not sure why I didn't see it before.) This is just a single inequality added to itself with $c,d$ instead of $a,b$ in the second copy. Namely, $$a(1-b)+(1-a)b\le a(1)+(1-a)(1)\le1.$$ Above we used the fact that if $b\in[0,1]$, then $b\le1$ and $1-b\le1$.
Steps to finding a generalised eigenspace, given a linear transformation.
If you know $\lambda$ is an eigenvalue, you can check the nullspaces of $T - \lambda I, (T-\lambda I)^2, (T - \lambda I)^3, \ldots$ to get the generalized eigenvectors. You can stop once you find that the nullspaces of $(T-\lambda I)^k$ and $(T-\lambda I)^{k+1}$ are the same for some $k$.
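As a concrete example, take $T = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ with eigenvalue $\lambda = 1$: the nullspace of $T - I$ is $\operatorname{span}\{e_1\}$, while $(T - I)^2 = 0$, so the nullspaces of $(T-I)^2$ and $(T-I)^3$ coincide and the generalized eigenspace is all of $\mathbb{R}^2$.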
Simplifying $\sum_{k=0}^b{n+k \choose n+k-a}$
$$S=\sum_{k=0}^{b} {n+k \choose n+k-a}=\sum_{k=0}^{b} {n+k \choose a}=[x^a]\sum_{k=0}^{b} (1+x)^{n+k}$$ $$S=[x^a] (1+x)^n \frac{(1+x)^{b+1}-1}{1+x-1}=[x^{a+1}]~((1+x)^{n+b+1}-(1+x)^n).$$ $$S={n+b+1 \choose a+1}-{n \choose 1+a}.$$ Here $[x^k]g(x)$ means the coefficient of $x^k$ in $g(x).$
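As a quick sanity check, take $n=2$, $a=2$, $b=2$: the left side is $\binom{2}{2}+\binom{3}{2}+\binom{4}{2} = 1+3+6 = 10$, and the formula gives $\binom{5}{3} - \binom{2}{3} = 10 - 0 = 10$.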
If $F(x) = \int_{1}^{x^2} (\sqrt{1 + u}) du$, find $F'(x)$
Chain Rule: $\dfrac{d}{d(x^{2})}\displaystyle\int_{1}^{x^{2}}\sqrt{1+u}du\cdot\dfrac{d(x^{2})}{dx}$.
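Carrying this out gives $$F'(x) = \sqrt{1+x^{2}}\cdot 2x = 2x\sqrt{1+x^{2}}.$$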
how to calculate the result of Quaternion Rotation?
Without checking for more errors, I am guessing it stems from this: $$ [-v\cdot\boldsymbol{p}, \omega\boldsymbol{p}+\boldsymbol{p} \times v][\omega, -v] $$ which would have introduced a sign error. It should be $$ [-v\cdot\boldsymbol{p}, \omega\boldsymbol{p}+v \times \boldsymbol{p} ][\omega, -v]$$ Update: actually it appears you make this type of error several times. I'm not sure if it is a different convention you are using, but the standard one I'm using is the other way around. Crunching the numbers below. The following revision seems to correct the problems I was detecting $$\begin{align} p' &=[\omega, v][0,\boldsymbol{p}][\omega, -v] \tag{1}\\ &= [-v\cdot\boldsymbol{p}, \omega\boldsymbol{p}+v\times\boldsymbol{p} ][\omega, -v] \tag{2}\\ & = [-(v\cdot\boldsymbol{p})\omega+(\omega\boldsymbol{p}+ v\times \boldsymbol{p} )\cdot(v),(v\cdot\boldsymbol{p})(v)+\omega(\omega\boldsymbol{p}+v\times \boldsymbol{p})-(\omega\boldsymbol{p}+v\times \boldsymbol{p} )\times v] \tag{3}\\ & = [0,(v\cdot\boldsymbol{p})v+\omega^{2}\boldsymbol{p}+\omega(v\times\boldsymbol{p})-\omega(\boldsymbol{p}\times v)-(v\times\boldsymbol{p})\times v] \tag{4}\\ & = [0,(v\cdot\boldsymbol{p})v+\omega^{2}\boldsymbol{p}+2\omega(v \times \boldsymbol{p})+v\times(v \times \boldsymbol{p})] \tag{5}\\ & = [0,v\times(v \times \boldsymbol{p})+\left \| v \right \|^{2}\boldsymbol{p}+\omega^{2}\boldsymbol{p}+2\omega(v \times \boldsymbol{p})+v\times(v \times \boldsymbol{p})] \tag{6}\\ & = [0,\boldsymbol{p}+2\omega(v \times \boldsymbol{p})+2(v\times(v \times \boldsymbol{p}))]\tag{7} \end{align}$$
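Two remarks on the last steps (my additions, not part of the original derivation): from (5) to (6) the BAC–CAB identity $v\times(v\times\boldsymbol{p}) = (v\cdot\boldsymbol{p})\,v - \|v\|^{2}\boldsymbol{p}$ is used in the form $(v\cdot\boldsymbol{p})\,v = v\times(v\times\boldsymbol{p}) + \|v\|^{2}\boldsymbol{p}$, and from (6) to (7) the unit-quaternion condition $\omega^{2} + \|v\|^{2} = 1$ is used. As a numerical sanity check of line (7) against the full sandwich product, here is a quick MATLAB sketch (the test values and the helper qmul are mine):

    % Compare the closed form (7) with [w, v][0, p][w, -v]
    theta = 0.7; n = [0 0 1];             % rotation by theta about the z-axis
    w = cos(theta/2); v = sin(theta/2)*n; % unit quaternion [w, v]
    p = [1 2 3];
    % quaternion product [s1, v1][s2, v2] in scalar/vector form
    qmul = @(s1,v1,s2,v2) deal(s1*s2 - dot(v1,v2), s1*v2 + s2*v1 + cross(v1,v2));
    [s, u] = qmul(w, v, 0, p);            % [w, v][0, p]
    [~, u] = qmul(s, u, w, -v);           % ([w, v][0, p])[w, -v]
    closed = p + 2*w*cross(v,p) + 2*cross(v, cross(v,p));
    max(abs(u - closed))                  % agrees up to rounding error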
Prove the following points lie on a circle.
Let $\vec{a} = \vec{DA}, \vec{b} = \vec{DB}, \vec{c} = \vec{DC}$ and $a,b,c$ be corresponding magnitudes. Let us look at what happens on the plane holding circle $ABD$. Let $X$ be the circle's center and $E'$ be the mid point of the shorter arc $AB$. It is not hard to see $$\angle E'DB = \frac12 \angle E'XB = \frac12 \angle AXE' = \angle ADE'$$ This implies $DE'$ is the angular bisector of $\angle ADB$ and $\vec{DE'}$ is pointing along the direction $\frac{\vec{a}}{a} + \frac{\vec{b}}{b}$. Since $DE$ is perpendicular to $DE'$, $\vec{DE}$ is pointing along the direction $\frac{\vec{a}}{a} - \frac{\vec{b}}{b}$. To proceed, we will re-express this fact in terms of barycentric coordinates. For any $P \in \mathbb{R}^3$, the barycentric coordinates of $P$ with respect to tetrahedron $ABCD$ is a 4-tuple $(\alpha_P, \beta_P, \gamma_P, \delta_P)$ which satisfies: $$\alpha_P + \beta_P + \gamma_P + \delta_P = 1\quad\text{ and }\quad \vec{P} = \alpha_P \vec{A} + \beta_P \vec{B} + \gamma_P \vec{C} + \delta_P\vec{D}$$ In particular, the barycentric coordinates for $D$ is $(0,0,0,1)$. Let's look at point $E$. Since $E$ lies on the plane holding $ABD$, $\gamma_E = 0$. Since $DE$ is pointing along the direction $\frac{\vec{a}}{a} - \frac{\vec{b}}{b}$, we find $\alpha_E : \beta_E = \frac1a : -\frac1b$. From this, we can deduce there is a $\lambda_E$ such that $$(\alpha_E, \beta_E, \gamma_E, \delta_E) = \left(\frac{\lambda_E}{a}, -\frac{\lambda_E}{b}, 0, 1 + \lambda_E\frac{a - b}{ab}\right)$$ By a similar argument, we can find $\lambda_F$ and $\lambda_G$ such that $$(\alpha_F, \beta_F, \gamma_F, \delta_F) = \left(0,\frac{\lambda_F}{b}, -\frac{\lambda_F}{c}, 1 + \lambda_F\frac{b - c}{bc}\right)\\ (\alpha_G, \beta_G, \gamma_G, \delta_G) = \left(-\frac{\lambda_G}{a},0,\frac{\lambda_G}{c}, 1 + \lambda_G\frac{c - a}{ca}\right)$$ In terms of barycentric coordinates, $D,E,F,G$ are coplanar when and only when the following determinant evaluates to zero. $$\mathcal{D}\stackrel{def}{=}\left|\begin{matrix} \alpha_E & \beta_E & \gamma_E & \delta_E\\ \alpha_F & \beta_F & \gamma_F & \delta_F\\ \alpha_G & \beta_G & \gamma_G & \delta_G\\ \alpha_D & \beta_D & \gamma_D & \delta_D \end{matrix}\right| = \left|\begin{matrix} \alpha_E & \beta_E & \gamma_E & \delta_E\\ \alpha_F & \beta_F & \gamma_F & \delta_F\\ \alpha_G & \beta_G & \gamma_G & \delta_G\\ 0 & 0 & 0 & 1 \end{matrix}\right| = \left|\begin{matrix} \alpha_E & \beta_E & \gamma_E\\ \alpha_F & \beta_F & \gamma_F\\ \alpha_G & \beta_G & \gamma_G\\ \end{matrix}\right| $$ Substituting the above expressions for the barycentric coordinates of $E,F,G$ into the last determinant, we find $$\mathcal{D} = \lambda_E\lambda_F\lambda_G \left| \begin{matrix} \frac1a & -\frac1b & 0\\ 0 & \frac1b & -\frac1c\\ -\frac1a & 0 &\frac1c \end{matrix} \right| = 0 $$ as the rows of the determinant on the RHS sum to zero. From this, we can conclude $D, E, F, G$ are coplanar. Since $D, E, F, G$ lie on the intersection of a sphere and a plane, they lie on a circle.
Isolated Points of Countable H-Closed Spaces
$\newcommand{\cl}{\operatorname{cl}}$First show that any countable $H$-closed space $X$ has an isolated point. Suppose that $X$ has no isolated points; we will get a contradiction by producing an open filter base with no cluster point. ($X$ is $H$-closed iff every open filter on $X$ has a cluster point.) The construction is recursive. Let $X=\{x_n:n\in\Bbb N\}$. Let $y_0\in X\setminus\{x_0\}$; there are disjoint open sets $U_0$ and $V_0$ such that $x_0\in U_0$ and $y_0\in V_0$. $V_0$ is infinite, since $X$ has no isolated points, so there is a point $y_1\in V_0\setminus\{x_1\}$, and there are disjoint open sets $U_1$ and $V_1$ such that $x_1\in U_1$ and $y_1\in V_1\subseteq V_0$. Given $x_k,y_k,U_k$, and $V_k$ for $k\le n$, choose $y_{n+1}\in V_n\setminus\{x_{n+1}\}$, and let $U_{n+1}$ and $V_{n+1}$ be disjoint open sets such that $x_{n+1}\in U_{n+1}$, and $y_{n+1}\in V_{n+1}\subseteq V_n$. Now verify that the filter generated by the filter base $\{V_n:n\in\Bbb N\}$ has no cluster point in $X$; this is where you’ll use the sets $U_n$. Now let $D$ be the set of isolated points of $X$, and if $\cl D\ne X$, let $G=X\setminus\cl D$. $G$ has no isolated points, so the previous construction can be carried out entirely in $G$ if we choose $y_0\in G\setminus\{x_0\}$ and $V_0\subseteq G$ at the start, and we get the same contradiction.
If $G$ is a super edge magic graph with $p$ vertices and $q$ edges then $\displaystyle \sum_{v \in V(G)}f(v)deg(v) $ equals some expression
For each edge $uv \in E(G)$, add $f(u)$ and $f(v)$. When we do this for every edge, it becomes $$\sum_{uv \in E(G)} f(u) + f(v) = \sum_{v \in V(G)} f(v) \deg (v)$$ because $f(v)$ gets added $\deg (v)$ times. But by definition, $$\sum_{uv \in E(G)} f(u) + f(v) = \operatorname{sum} (S) ,$$ and $S$ is a set of $q$ consecutive integers, starting at $s$. Therefore, $$\operatorname{sum}(S) = \sum_{k=s}^{s+q-1} k = qs + \sum_{k=0}^{q-1} k = qs + \binom{q}{2}.$$
Relation between Pontryagin number and Euler number for a four-dimensional closed manifold
Let $M$ be a connected, closed, smooth, oriented four-dimensional manifold. By the Hirzebruch signature theorem, the first Pontryagin number is $$p_1(M) = \langle p_1(TM), [M]\rangle = 3\tau(M)$$ where $\tau(M) = b^+(M) - b^-(M)$ is the signature of $M$. On the other hand, \begin{align*} \chi(M) &= b_0(M) - b_1(M) + b_2(M) - b_3(M) + b_4(M)\\ &= 1 - b_1(M) + b^+(M) + b^-(M) - b_1(M) + 1\\ &= 2 - 2b_1(M) + b^+(M) + b^-(M). \end{align*} Now note that $p_1(M) = 3\tau(M) \equiv b^+(M) + b^-(M) \bmod 2$ and $\chi(M) \equiv b^+(M) + b^-(M) \bmod 2$, so $p_1(M)$ and $\chi(M)$ have the same parity, i.e. they are either both even, or both odd. Aside from the fact that $p_1(M)$ and $\chi(M)$ have the same parity, and that $p_1(M)$ is divisible by three, there are no more restrictions on these two quantities - in particular, no further relations between them. To see this, it is enough to exhibit, for any integers $a$ and $b$ of the same parity, a connected, closed, smooth, oriented four-manifold $M$ with $p_1(M) = 3a$ and $\chi(M) = b$ - note, $3a$ has the same parity as $a$, so $3a$ and $b$ have the same parity. First observe that $$p_1(T^4\#k\mathbb{CP}^2\# l\overline{\mathbb{CP}^2}) = 3\tau(T^4\#k\mathbb{CP}^2\# l\overline{\mathbb{CP}^2}) = 3(k-l)$$ and $$\chi(T^4\#k\mathbb{CP}^2\# l\overline{\mathbb{CP}^2}) = k + l.$$ Taking $k = \frac{1}{2}(a + b)$ and $l = \frac{1}{2}(b - a)$, which are integers because $a$ and $b$ have the same parity, we see that $M = T^4\#k\mathbb{CP}^2\# l\overline{\mathbb{CP}^2}$ satisfies $p_1(M) = 3a$ and $\chi(M) = b$.
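For the record, the two connected-sum formulas follow from Novikov additivity of the signature and from $\chi(A \# B) = \chi(A) + \chi(B) - 2$ for closed four-manifolds: since $\tau(T^4) = \chi(T^4) = 0$, $\tau(\mathbb{CP}^2) = 1$, $\tau(\overline{\mathbb{CP}^2}) = -1$, and $\chi(\mathbb{CP}^2) = \chi(\overline{\mathbb{CP}^2}) = 3$, a connected sum with $1 + k + l$ summands has $\tau = k - l$ and $\chi = 3(k+l) - 2(k+l) = k + l$.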
Functional equation $f(x+f(x+y))=f(x-y)+f(x)^2 \quad \forall x,y\in \mathbb R$
For a starter: We denote $a=f(0)$, $b=f(a)$. Substituting $y=-x$ into the original equation gives $$f(x+a)=f(2x)+f(x)^2\text.\tag1\label{1a}$$ Substituting $x=a$ into \eqref{1a}, we have $b=0$. Again, substituting $x=0$ into \eqref{1a}, we have $b=f(0)+f(0)^2$. This shows $f(0)+f(0)^2=0$, hence $f(0)=0$ or $-1$. You can proceed from here.
Cochains: terminology
If $M$ is a finitely generated $\Bbb Z$-module, then $\text{Hom}(M,\Bbb R)\cong\text{Hom}(M,\Bbb Z)\otimes \mathbb R$. Integrating any $k$-form over a (smooth) chain gives a practical such "real" cochain.
Alternating series problem
The Alternating Series Test says that $$ \sum_{k=0}^\infty (-1)^ka_k $$ converges as long as $a_{k+1}\le a_k$ and $\lim\limits_{k\to\infty}a_k=0$. Comparing to the Leibniz Series, your series converges to $2-\pi/2$.
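For reference, the Leibniz series mentioned above is $$\frac{\pi}{4} = \sum_{k=0}^{\infty}\frac{(-1)^k}{2k+1} = 1 - \frac{1}{3} + \frac{1}{5} - \cdots,$$ whose convergence is itself a textbook application of this test.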
Calculating area of sphere with constraint on zenith
The problem seems to be what you mean by "the zenith of points of both $\mathbb S_R$ and the unit sphere translated up $R$ units at points which are of the same height." The quantity $R \sin \alpha$ is the distance from the $z$ axis of points whose spherical coordinates are $(R, \theta, \alpha)$ for any $\theta,$ that is, the points on the sphere $\mathbb S_R$ that are at a zenith angle of $\alpha$ relative to the center of that sphere. So far, this makes sense. The quantity $\sin \alpha$ is the distance from the $z$ axis of points on the unit sphere about the origin at a zenith angle of $\alpha.$ This does not change when you translate that sphere upward $R$ units, so it's unclear why you would add $R.$ In fact, since $0 < \sin\alpha \leq 1$ whenever $0 < \alpha < \pi,$ for any such $\alpha$ we would find that $R \sin\alpha \leq R < R + \sin\alpha,$ so it is impossible to have $R \sin\alpha = R + \sin\alpha.$ And that equation is just as impossible for $\alpha = 0$ or $\alpha = \pi$ as long as $R > 0.$ In short, the equation you proposed just does not make sense. Now there may be some sense in equating the heights ($z$ coordinates) of points at the intersection of $\mathbb S_R$ and the unit sphere around $(0,0,R).$ For that, we use the cosine of the zenith angle relative to each sphere, and we note that the zenith angle of these points relative to the sphere around $(0,0,R)$ is greater than their zenith angle relative to $\mathbb S_R.$ There is a relationship between the two angles, but it's not what your equation says. As for the claim that $2R^2-2R^2\cos\alpha=1,$ which the other student presumably did not explain, let $P$ be a point where the two spheres intersect. Then the points $(0,0),$ $(0,R),$ and $P$ form an isosceles triangle with two legs of length $R$ (each a radius of $\mathbb S_R$) and a base of $1$ (the segment from $(0,R)$ to $P,$ which is a radius of the unit sphere around $(0,R).$) If we drop a perpendicular from either $P$ or $(0,R)$ to the opposite leg, we divide the isosceles triangle into two right triangles, one with hypotenuse $R$ and angle $\alpha$ and one with hypotenuse $1$ and angle $\frac\alpha2.$ If we drop a perpendicular from $(0,0)$ to the base of the isosceles triangle, we get two congruent right triangles with hypotenuse $R$ and angle $\frac\alpha2,$ with leg $\frac12$ opposite the angle $\frac\alpha2.$ In either case we end up having to deal with a right triangle with angle $\frac\alpha2.$ Let's try working with the second pair of right triangles. The construction tells us that $R \sin\frac\alpha2 = \frac12.$ Square both sides and multiply by $2$: $$2R^2 \sin^2\frac\alpha2 = 2\left(\frac14\right) = \frac12.$$ Use the trigonometry identity $2 \sin^2\frac\alpha2 = 1 - \cos\alpha$ (which you can get from either the half-angle sine formula or the double-angle cosine formula) to substitute $1 -\cos\alpha$ for $2 \sin^2\frac\alpha2$: $$R^2(1 - \cos\alpha) = \frac12.$$ You can then distribute the $R^2$ over $1-\cos\alpha$ and multiply both sides by $2$ to get the other student's formula if you want, although I think the formula above is more convenient for just plugging into your known area formula.
Minimum size of test given that power function is at least 0.6 in p = 0.3
Out of the four quantities significance level $\alpha,$ power, sample size, and difference $\Delta$ detected, a power computation typically specifies three and finds the remaining one. The following power curve from Minitab uses $\alpha = 0.05, n = 25, \Delta = .4 -.3 = .1,$ and shows power for a range of 'comparison' values (.3 above). With $\alpha = 0.05,$ the rejection region is too small to get 0.6 power against $\mu_a = .3.$ So you need larger $\alpha.$ What critical value $c$ gives power 0.6? What $\alpha$ is implied by that? Maybe this will help you clarify a productive approach to your Question.
why the uniform norm in C[a,b] is invariant under linear transformation?
You map $C[a,b]$ onto $C[c,d]$ by $Tf(x) = f(\tau(x))$ where $\tau$ is a homeomorphism of $[c,d]$ onto $[a,b]$. An affine map $\tau$ will do. Then it's obvious that the same values are taken by $f$ and $Tf$, and thus that the supremum norms are the same.
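Explicitly, one such affine homeomorphism is $$\tau(x) = a + \frac{b-a}{d-c}\,(x - c),$$ which maps $[c,d]$ onto $[a,b]$.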
Existence of weak Schauder-basis for concrete example.
The answer to the general question is no. It is a result of Banach that a basis for the weak topology is also a basis for the norm topology. Now take a reflexive space $X$ without the approximation property (for instance, any subspace of $\ell_p$ where $p\neq 1,2,\infty$ which lacks the approximation property). The ball $B$ of $X$ is weakly* (=weakly) compact. It cannot have the property you want as otherwise $X$ would have a basis. As for $P(X)$, have you tried Dirac deltas corresponding to points from a countable dense subset of $X$?
Factor pairs problem
Let $a, b, c, d, N \in \mathbb{Z}\setminus\{0\}$ such that $N = ab = cd$. Suppose that $(a+1)(b+1) = (c+1)(d+1)$. Expanding $(a+1)(b+1) = (c+1)(d+1)$ we get that $a+b = c + d$. Now substituting $b = \frac{N}{a}$ and $d = \frac{N}{c}$ we get $a+\frac{N}{a} = c + \frac{N}{c}$. Rearranging we have $(a-c)(ac-N)=0$, and using that $N = ab$ this becomes $a(a-c)(c-b) = 0$; since $a \neq 0$, we get $(a-c)(c-b) = 0$. Hence the only solution is $\{a, b\} = \{c, d\}$.
When Killing form equals a constant times the trace
Because $L$ is simple, any two non-degenerate symmetric invariant bilinear forms are scalar multiples of each other, over an algebraically closed field of characteristic zero. The proof follows from Schur's Lemma (compare with this question). Since for simple Lie algebras the Killing form and the trace form are both non-degenerate, there is a unique $a\neq 0$ such that $ \kappa(x,y)=a\cdot tr(xy)$ for all $x,y\in L$. Actually, the unique value $a$ for each classical Lie algebra $L$ is explicitly known, and listed here. One can also directly prove these formulas, e.g., that $\kappa(x,y)=2ntr(xy)$ for all $x,y \in \mathfrak{sl}_n(\mathbb{C})$, thereby proving uniqueness of the scalar again. Reference: This homework, exercise $4$.
Most General Definition of Continuity?
The intuitive meaning of continuity is if a function carries $x$ to $y$, then it should carry anything close to $x$ to something close to $y$. One can generalize the concept of function here, but I don't think that is what you are after. The only other thing to mess with is the definition of "close". Point-set topology pushes this concept pretty much to its limit. It allows you to define closeness just by specifying set of neighbors for each point, with the only requirement being that the intersection of two neighborhoods should itself be a neighborhood. That intersection property pretty much embodies the concept of closeness. The neighborhoods may contain things extending far away from the point, but in order to be a neighborhood, they should contain everything sufficiently near the point, so everything sufficiently near should be in both - i.e., in their intersection. Any concept of closeness should be expressible by such sets, so there isn't really any more leeway to generalize it. Any time you find something that can reasonably be considered "continuity", you will find it is expressible in terms of topologies - even if defined otherwise. For example, the "Scott continuity" suggested by John Forkosh is defined by the requirement that the function preserve directed suprema, not limits. But then you find that you can define the Scott topology, and then Scott-continuity is just continuity under the Scott topology.
Elliptic Regularity Theorem
"$C^2$-existence theorem" is false, even if $L$ is the Laplacian. This is in Gilbarg & Trudinger, problem 4.9. Recently discussed here, where a reference to an older thread on this topic is found. "$C^2$-elliptic regularity" is also false. For example, the harmonic extension of a $C^2$-smooth function on the boundary of the unit disk $\mathbb D$ is not necessarily in $C^2(\overline{\mathbb D})$. This is discussed in Chapter II of Harmonic Measure by Garnett and Marshall, although for 1st derivatives instead of 2nd. Below I take their example as a starting point. Consider a conformal map $f:\mathbb D\to \Omega$ where $\Omega=\{x+iy:0<x<\frac{1}{1+|y|}\}$. Clearly, $f$ is unbounded. On the other hand, $\operatorname{Re} f$ has a continuous extension to $\overline{\mathbb D}$ because it has a finite limit even at the points which are mapped by $f$ to infinity. Write $f(z)=\sum_{n=0}^\infty c_n z^n$ and define $F(z)=\sum_{n=1}^\infty c_n n^{-2}z^n$. I claim that $\operatorname{Re} F$ is the counterexample. Indeed: $\operatorname{Re} F$ is $C^2$ smooth on the boundary, because writing $z=e^{it}$ and differentiating in $t$ twice, we get $-\operatorname{Re} f+\operatorname{Re} f(0)$. (To be more rigorous, we can integrate the latter twice to get the former.) If all second-order partials of $\operatorname{Re} F$ were bounded in $\mathbb D$, then $F''$ would be bounded. But this is impossible because $z(zF')'=f(z)-f(0)$, which is unbounded.
Find the indefinite integral of $1/(16x^2+20x+35)$
You want to complete a square. So, remember that $$ (x+\alpha)^2 = x^2 + 2 \alpha x + \alpha^2. $$ You have $$ 2\alpha = \frac{20}{16}, $$ i.e. $\alpha = 5/8$. Hence $$ x^2 +\frac{20}{16}x = \left( x + \frac{5}{8} \right)^2 - \frac{25}{64}. $$ Can you go further, now?
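In case it helps to see the hint carried through: $$16x^2+20x+35 = 16\left(x+\frac{5}{8}\right)^2 + \frac{115}{4},$$ so $$\int \frac{dx}{16x^2+20x+35} = \frac{1}{16}\int \frac{dx}{\left(x+\frac58\right)^2 + \frac{115}{64}} = \frac{1}{2\sqrt{115}}\arctan\frac{8x+5}{\sqrt{115}} + C.$$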
Visualizing a volume with MATLAB
Finally, I've accomplished my goal by individually surf-plotting each face of the transformed cube. After looking at the docs of surf I found sphere(n) to produce an input for surf which would render the unit sphere and looked into it: surf(X,Y,Z) where all are matrices of the same size produces the surface with these points, preserving the rectangular topology. This leads to the following code:

    [e1x, e1y] = meshgrid(tx,ty);
    [e2x, e2z] = meshgrid(tx,tz);
    [e3y, e3z] = meshgrid(ty,tz);
    % These are the three different grids needed for the cube faces

    e1x = e1x(:); (...)            % Make all of them columns
    e1z = ones(size(e1x)); (...)   % The respective third dimensions will be constant 0 or 1

    bottom = [  e1x   e1y 0*e1z]';
    top    = [  e1x   e1y 1*e1z]';
    front  = [  e2x 0*e2y   e2z]';
    back   = [  e2x 1*e2y   e2z]';
    left   = [0*e3x   e3y   e3z]';
    right  = [1*e3x   e3y   e3z]';
    % Create the faces:
    % surf(bottom(1,:), bottom(2,:), bottom(3,:)) with appropriate reshape
    % would plot the bottom face of the unit cube

    x = [bottom top front back left right];
    Kstart = cumsum([1 size(bottom, 2) size(top, 2) size(front, 2) size(back, 2) size(left, 2)]);
    Kend = [Kstart(2:end)-1 size(x,2)];
    % Concatenate and save the start- and end-indices of each face
    Ndim = [Ny Nx; Ny Nx; Nz Nx; Nz Nx; Nz Ny; Nz Ny];
    % Dimensions for each face

And then after evaluation of x into the variables Px, Py and Pz:

    hold on
    for k=1:length(Kstart)
        Kcurr = Kstart(k):Kend(k);   % Index range of the points for the k-th face
        surf(reshape(Px(Kcurr), Ndim(k,:)), ...
             reshape(Py(Kcurr), Ndim(k,:)), ...
             reshape(Pz(Kcurr), Ndim(k,:)), ...
             k*ones(Ndim(k,:)), 'EdgeColor', 'none');
        % surf each face assigning color k to face k for distinction
    end

This produces the following image: Note that the red line is where two faces meet inside the figure.
Fourier series of $\sin{\sum_n a_n \sin{n\theta}}$
There is no particular formula for the Fourier series of this kind of function, other than the definition of Fourier series. While Fourier coefficients are nicely transformed under linear maps, the relation between the Fourier series of $f$ and $\sin f$ is completely opaque. For example, take $\sin \sin \theta$. Its Fourier series begins with $$ 2J_1(1)\sin\theta + (14J_1(1)-8J_0(1))\sin 3\theta +(626 J_1(1)-360 J_0(1))\sin 5\theta +(73534 J_1(1)-42288J_0(1)) \sin7\theta +\dots $$ a bunch of very nice integers of course, but not something you could get directly from $a_1=1$. To say nothing of the Bessel functions appearing in the coefficients above.
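For context, these coefficients come from the Jacobi–Anger expansion $$\sin(z\sin\theta) = 2\sum_{k=0}^{\infty} J_{2k+1}(z)\sin\big((2k+1)\theta\big)$$ evaluated at $z = 1$, with each $2J_{2k+1}(1)$ rewritten as an integer combination of $J_0(1)$ and $J_1(1)$ via the recurrence $J_{n+1}(1) = 2n\,J_n(1) - J_{n-1}(1)$.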
Why would the Jacobian not be zero in this case?
What is $U_y$? The Jacobian matrix is $J = \begin{pmatrix} 1 & 0 \\ 3v & 3u\end{pmatrix}$. Its determinant is $3u$.
Cohomology of the sphere $S^{n}$ with coefficients in abelian group $G$
Use long exact sequences and go by induction. You've done the base case. In order to compute cohomology groups for $\mathbb{S}^{n+1}$, use the long exact sequence in cohomology for the good pair $(\mathbb{D}^{n+1},\partial(\mathbb{D}^{n+1})\cong\mathbb{S}^n)$. Starting at $H^{k-1}(\mathbb{S}^{n})$, we get: \begin{align*} \dots\to H^{k-1}(\mathbb{S}^n)\to H^k(\mathbb{S}^{n+1})\to H^k(\mathbb{D}^{n+1})\to H^{k}(\mathbb{S}^n)\to H^{k+1}(\mathbb{S}^{n+1})\to\dots \end{align*} Here, we used the fact that since $(\mathbb{D}^{n+1},\mathbb{S}^n)$ is a good pair, its relative cohomology is isomorphic to the (reduced) cohomology of $\mathbb{D}^{n+1}/\mathbb{S}^n$, which is $\mathbb{S}^{n+1}$. By the inductive hypothesis and the fact that the cohomology of a disk is just that of a point, this gives us the sequence (where $k<n-1$) \begin{align*} \dots\to 0\to H^k(\mathbb{S}^{n+1})\to 0\to 0\to H^{k+1}(\mathbb{S}^{n+1})\to\dots \end{align*} So, $H^{k}(\mathbb{S}^{n+1})=0$ as well. The LES for $H^{n}$ is slightly different, but try to write out the LES and the same result should follow.
Missing solution after solving system of equations.
The solution from the textbook is correct. The solution you mention is a solution of the homogeneous system with the same matrix, but not of the full system. I don't know why you expect it to appear as a solution of the non-homogeneous system.
Integral $\int\limits_0^1\frac{-2\ln(x)(1+x^2)^2-2(1-x^4)}{(1-x^2)^3}\,\mathrm{d}x$?
Let $f(x)$ be your integrand. Maple gives the antiderivative as $$ \frac{1}{2(x+1)}+\frac{1}{2(x-1)}-\frac{\text{dilog}(x+1)}{2} -\frac{\ln \left( x \right) \ln \left( x+1 \right)}{2} -{\frac {\ln \left( x \right) x \left( x+2 \right) }{2\; \left( x+1 \right) ^{2}}}-\frac{{\text {dilog}} \left( x \right)}{2} +{ \frac {\ln \left( x \right) x \left( x-2 \right) }{2\; \left( x-1 \right) ^{2}}}+{\frac {\ln \left( x \right) x}{2\;(x+1)}}-{ \frac {\ln \left( x \right) x}{2\;(x-1)}} $$ where $$ \text{dilog}(x) = \int_1^x \frac{\ln(t)}{1-t}\; dt $$ In fact, if $F_1(x)$ contains the terms without dilog, $$ F_1(x) = \frac{1}{2(x+1)}+\frac{1}{2(x-1)}-\frac{\ln \left( x \right) \ln \left( x+1 \right)}{2} -{\frac {\ln \left( x \right) x \left( x+2 \right) }{2\; \left( x+1 \right) ^{2}}} +{ \frac {\ln \left( x \right) x \left( x-2 \right) }{2\; \left( x-1 \right) ^{2}}}+{\frac {\ln \left( x \right) x}{2\;(x+1)}}-{ \frac {\ln \left( x \right) x}{2\;(x-1)}}$$ you can verify that $$F_1'(x) - f(x) = - \frac{\ln(x)}{2(x-1)} - \frac{\ln(x+1)}{2x}$$ and $$\lim_{x \to 0} F_1(x) = \lim_{x \to 1} F_1(x) = 0$$ So your integral becomes $$ \int_0^1 \frac{\ln(x)\; dx}{2(x-1)} + \int_0^1 \frac{\ln(x+1)\; dx}{2x} $$ Using the change of variables $u=x+1$ in the second of these and combining the two, we find your integral is $$ \frac{1}{2}\int_0^2 \frac{\ln(x)\; dx}{x-1} $$ which I think is fairly "well-known".
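For what it's worth, this last integral has a closed form: splitting at $x = 1$ and substituting $x = 1+u$ on $[1,2]$, $$\int_0^1 \frac{\ln x}{x-1}\,dx = \frac{\pi^2}{6}, \qquad \int_1^2 \frac{\ln x}{x-1}\,dx = \int_0^1 \frac{\ln(1+u)}{u}\,du = \frac{\pi^2}{12},$$ so the original integral equals $\frac12\left(\frac{\pi^2}{6} + \frac{\pi^2}{12}\right) = \frac{\pi^2}{8}$.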
Prove $\{z^n|n \in \mathbb{Z}\}$ is an orthonormal basis of $L^2(S^1)$ with Haar measure
You missed another condition. You need the fact that the span contains the complex conjugate of each of its elements. For this note that the conjugate of $z^{n}$ is $\frac 1 {z^{n}}=z^{-n}$ when $|z|=1$.