Simplify $\sqrt{10 + \sqrt{24} + \sqrt{40} + \sqrt{60}} $
From $6 = 2\times 3$, $10 = 2\times 5$, $15 = 3 \times 5$, observe that $10 + 2(\sqrt6 + \sqrt{10} + \sqrt{15}) = (\sqrt2+\sqrt3+\sqrt5)^2$. Since $\sqrt{24}=2\sqrt6$, $\sqrt{40}=2\sqrt{10}$ and $\sqrt{60}=2\sqrt{15}$, the given expression simplifies to $\sqrt2+\sqrt3+\sqrt5$.
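For readers who want to double-check the identity, here is a quick verification with Python's sympy (my addition, not part of the original answer); sympy automatically rewrites $\sqrt{24}$ as $2\sqrt6$, and so on:

```python
from sympy import sqrt, expand

# The key identity: (sqrt(2) + sqrt(3) + sqrt(5))^2 = 10 + sqrt(24) + sqrt(40) + sqrt(60)
s = sqrt(2) + sqrt(3) + sqrt(5)
print(expand(s**2))                                           # 2*sqrt(6) + 2*sqrt(10) + 2*sqrt(15) + 10
print(expand(s**2 - (10 + sqrt(24) + sqrt(40) + sqrt(60))))   # 0
```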
$\mathsf{AC}_{\omega_1}$ and $L(\mathbb R)$
No. For example in Solovay's model, we start with $L$ and an inaccessible cardinal $\kappa$, and then consider $L(\Bbb R)$ in $L[G]$, where $G$ is $L$-generic for collapsing $\kappa$ to be $\omega_1$. In this model there is no set of reals of size $\aleph_1$, but there is always a surjection from $\Bbb R$ onto $\omega_1$. So $\sf AC_{\omega_1}$ must fail. Other examples come from the natural models of $\sf AD$, which of course also disprove $\sf AC_{\omega_1}$. I have a sneaking suspicion that the large cardinal is necessary here, but I don't see a proof one way or the other. I will note that the "standard attempt" of adding $\omega_1$ Cohen reals and looking at $L(\Bbb R)$ will satisfy $\sf AC_{WO}$, that is choice from families which can be well-ordered, so in particular $\sf AC_{\omega_1}$ (this also gives an alternative proof of $\sf DC$).
How much information is in the question "How much information is in this question?"?
There is in fact an existing mathematical definition of this exact concept called Kolmogorov complexity, but you will surely be disappointed because there is a serious catch: it is only defined up to a constant additive factor which depends on your model of computation. But aside from that factor, it is an invariant metric across a wide range of models, so even though we can't pin down the "inherent" Kolmogorov complexity of a particular string without reference to a particular model, if we have two infinite sequences of strings, we can compare them to see if one eventually becomes more complex than the other in a more broadly-applicable sense.
Find a point C on an infinite line AB which, when connecting two other points M and N, would form congruent angles
This is an important problem in optics, where the line $\overleftrightarrow{AB}$ is a mirror, point $M$ is an eye and point $N$ is an object. The point $C$ is then the point on the mirror between the eye and the object's image. Just drop a perpendicular from point $N$ to line $\overleftrightarrow{AB}$ and extend it to the point $N'$ on the opposite side of the line, the same distance from the line as $N$. Point $C$ is then the intersection of line $\overleftrightarrow{AB}$ and line segment $\overline{MN'}$. Point $N'$ is the image of point $N$ in the mirror.
Defining polynomial with exact roots
Here I'm assuming that $P_n(F)$ is the set of polynomials of degree less than $n$ with coefficients in a field $F$. From the wording of your question I deduce that you know that the product of two polynomials is again a polynomial and that $p(\lambda)=0$ if and only if $z-\lambda$ divides $p$. This means that by multiplying together the polynomials $z-\lambda_i$ for $i=1,\dotsc,m$ you obtain a polynomial $p$ such that $p(\lambda_i)=0$ for each $i$. Note that it has minimal degree as long as the $\lambda_i$ are distinct!
4.15 - Exercise - Mathematical Logic - Ebbinghaus
Long comment. We have $\text{var}(t) \subseteq \{ v_0,\ldots, v_{n-1} \}$. Thus, for the base case $t := v_0$ we must have $t^{\mathfrak A}[g_0]=\langle \ t^{\mathfrak A_i}[g_0(i)] \ \mid \ i \in I \ \rangle$. And this is trivial because the "object" $g_0$ of the direct product $\mathfrak A$ is an element of the domain $\prod_{i \in I} A_i$, i.e. a "sequence" $\langle \ g_0(i) \in A_i \ \mid \ i \in I \ \rangle$.
(DIFF EQ) Can functions be neither linearly dependent nor independent over an interval? Confusing definition.
The functions from number 5 are linearly dependent because $$\color{red}{1}\times(t+1)+\color{red}{(-1)}\times(t^2+1)+\color{red}{1}\times(t^2-t)=0.$$ Note that the numbers in red don't depend on $t$.
Is any property of the number TREE($3$) known?
Yes. I have used the last binary (not decimal) digits of the steps of the TREE construction and proven that TREE$(3)$ is odd; in fact, its last three binary digits are $011$.
Proof that certain operators are compact
a) $T$ is bijective, and $C[0,1]$ is infinite dimensional, so $T$ is not a compact operator. d) After having shown that $T$ is well-defined, define $S_n(x)(t):=\sum_{k=1}^nx(k^{-1})\frac{t^k}{k!}$. What about the dimension of the range of $S_n$? e) The set $\{x_p\colon t\mapsto t^p,p\in\Bbb N\}\subset C[0,1]$ is bounded, and $T(x_p)(t)=e^{t^p}$. The task is to show that no subsequence of $\{T(x_p)\}$ forms an equicontinuous set. f) Arzelà-Ascoli's theorem is useful.
The equation $5^x+2=17^y$ doesn't have solutions in $\mathbb{N}$
Hint: as you say, take the equation modulo $4$, that is, $$5^x + 2 \equiv 17^y \mod 4.$$ The rules of modular arithmetic allow you to substitute reduced representatives modulo $4$ for the bases of the exponents, namely for $5$ and for $17$. What do you get after making the substitutions?
Do all continuous piecewise affine functions belong to the class ($A_1$) of Muckenhoupt functions?
Here is a counterexample on $(0, \infty)$ (it can be symmetrically extended to all of $\mathbb{R}$). Let $\omega(x)$ be a function such that for all integers $n \geq 0$ $$\omega(x) = 1 \,\,\,\,\,\, \forall x \in [n, n + 1/2)$$ and $$\omega(x) = 1 + 2n\left(x - (n + 1/2)\right) \,\,\,\,\,\, \forall x \in [n + 1/2, n + 1)$$ Then on each segment $[n, n+1)$, the function is constantly one on the first half of the segment, and rises linearly to $n + 1$ on the second half. Now consider the balls $B = B(n + 1/2, 1/2)$. We have: $$\frac{1}{|{B}|}\int_{B} \omega(y)\,dy = 1 + \frac{n}{4}$$ But $\omega(n + 1/2) = 1$ for all $n$. We cannot find a single constant $C$ such that $$1 + \frac{n}{4} \leq C\cdot 1$$ for all $n$.
norm of Ricci curvature and Einstein manifold
It is correct because $\|Ric-\frac{S}{n}g\|^{2}=\|Ric\|^{2}-\frac{S^{2}}{n}=0$.
weighted normal equations derivation
Starting with Strang's equation, set $\;F\!=\!WA,\;g\!=\!Wb\;$ to obtain $$\eqalign{ &(A^TW^T)\,(WA)x = (A^TW^T)\,(Wb) \\ &(F^TF)x = F^Tg \\ }$$ The least-squares solution to this system is $$x = (F^TF)^{-1}F^Tg$$ assuming $(F^TF)^{-1}$ exists. If the inverse does not exist, then the solution can be written as $$x=F^+g + (I-F^+F)y$$ where $F^+$ is the Moore-Penrose inverse, $\,\big(I-F^+F\big)\,$ is the nullspace projector, and $y$ is an arbitrary vector.
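To make the recipe concrete, here is a minimal numeric sketch with numpy (variable names and data are mine, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 3))                 # 10 equations, 3 unknowns
b = rng.normal(size=10)
W = np.diag(rng.uniform(1.0, 2.0, size=10))  # an arbitrary weight matrix

# Weighted least squares via F = WA, g = Wb
F, g = W @ A, W @ b
x_pinv = np.linalg.pinv(F) @ g               # Moore-Penrose solution F^+ g
x_ne = np.linalg.solve(F.T @ F, F.T @ g)     # normal equations, valid when F^T F is invertible
print(np.allclose(x_pinv, x_ne))             # True: here F has full column rank
```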
Let $S$ be the set of all circles on the unit sphere in $\mathbf{R}^3$. Give a smooth manifold structure to $S$.
A circle is just the intersection of the sphere with a plane; the case of a point is the case of a tangent plane. The set of affine planes in $R^3$ is naturally a manifold of dimension 3 (in fact it is the tautological line bundle over $RP^2$), and your manifold appears as a closed submanifold with boundary the sphere $S^2$ in it. It is slightly easier to describe the set of oriented circles, which are intersections of oriented planes with the sphere. An oriented plane is given by a unit (normal) vector $u$ and a parameter $t\in R$, with the equation $\langle u,X\rangle=t$, where $X=(x,y,z)$ is the generic point. Note that this plane meets the sphere iff $|t|\leq 1$, so the set of oriented circles is just $S^2\times [-1,1]$. For the case of unoriented circles, you can check that it is the subset of the tautological line bundle over $RP^2$ made up of points at distance $\leq 1$ from the origin. Its boundary is the 2-sphere as expected. Note the beautiful action of $SO(3)$ on this manifold, which is transitive on each family of circles of given radius, proving that each such family is a sphere, unless the radius is 0.
Orthogonal transformation and vector product
I think the problem may not be clear enough about what it means by invariant, since when we say something is invariant we mean that the final result doesn't depend on the transformation we apply to the initial objects. For example, the dot product in $\mathbb{R}^n$ is invariant under orthogonal transformations: $$A(u) \cdot A(v)= (Au)^T(Av) = u^TA^TAv = u^Tv = u\cdot v$$ In the case of the cross product you have $Au \times Av = (\det A)\, A(u\times v)$ (your formula is incorrect; for a proof see http://math.ucr.edu/~res/math132/crossproducts.pdf). So the invariance only holds when $\det A=1$, which geometrically means $A$ is a rotation about a given axis. What happens then if $\det A=-1$? Edit: Let $x,y,z\in \mathbb{R}^3$ be arbitrary vectors and $A$ an orthogonal matrix. Since $\det A\neq 0$ the transformation is onto, so there exists $z'\in \mathbb{R}^3$ such that $z=Az'$. Then, using the notation of the link: $$\langle Ax\times Ay, z \rangle = \langle Ax\times Ay, Az' \rangle = [Ax\,\,Ay\,\,Az'] = \det A\,[x\,\,y\,\,z']=\det A\,\langle x\times y, z' \rangle = \det A \,\langle A(x\times y), Az' \rangle = \det A\,\langle A(x\times y), z \rangle .$$
Prove that $ \lim_{n \to \infty} \frac{{(\prod_{a=1}^{n} {a^{a^{p}}}})^\frac{1}{n^{p+1}}}{n^{\frac{1}{p+1}}} = e^{-\frac{1}{(p+1)^2}}$
If you take the logarithm you can rewrite it as $$\frac{1}{n}\sum_{a=1}^{n}\left(\frac{a}{n}\right)^{p}\ln\left(\frac{a}{n}\right) + \left(\frac{1}{n}\sum_{a=1}^{n}\left(\frac{a}{n}\right)^{p}-\frac{1}{p+1}\right)\ln(n).$$ By the Riemann sum property, the first term converges to $\int_0^1 x^p\ln x\,dx=\frac{-1}{(p+1)^2}$, and the second term converges to zero (the bracket is $O(1/n)$, which beats the $\ln n$ factor), hence the result.
Solutions to $\boldsymbol{\mathbf{A}}\boldsymbol{\mathbf{x}} = \boldsymbol{\mathbf{b}}$
Sure, if $Ax_1 = b$ and $Ax_2 = b$ then by linearity, $A(x_1 - x_2) = 0$ hence $x_1-x_2$ is a nullspace vector.
Limit of $\left\lfloor x \left\lfloor \frac1x \right\rfloor \right\rfloor$, as $x$ goes to zero
If we look at $$f(x)=\left\lfloor \frac1x \lfloor x\rfloor\right\rfloor$$ then for $n\in \Bbb{N}$ and $x\in(n,n+1)$ we have that $$\frac1x\lfloor x\rfloor=\frac{n}{x}<1$$ so $f(x)=0$, and for $x\in(-n-1,-n)$ we have that $$\frac1x\lfloor x\rfloor =\frac{-n-1}{x}>1$$ and also $<2$, so $f(x)=1$. So you could make a mistake (like I did) by saying $$\lim_{x\to\infty}f(x)=0=\lim_{x\to0^+}f(\frac1x)\\\lim_{x\to-\infty}f(x)=1=\lim_{x\to0^-}f(\frac1x)$$ However, the function is actually discontinuous at positive integers: $f(1)=f(2)=\cdots=f(n)=1$ even though $\lim_{x\to n}f(x) = 0$, so the limit as $x\to\infty$ doesn't exist. The function is continuous for $x<-1$, hence the limit at $-\infty$ does exist and $$\lim_{x\to-\infty}f(x)=1=\lim_{x\to0^-}f(\frac1x)$$
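A numeric check of the original limit $\lim_{x\to 0}\lfloor x\lfloor 1/x\rfloor\rfloor$ (a quick Python sketch, my addition) illustrates the same behaviour: no limit from the right, limit $1$ from the left.

```python
import math

g = lambda x: math.floor(x * math.floor(1 / x))

# From the right, along reciprocals of integers, g is 1 ...
print([g(2.0**-k) for k in range(1, 8)])         # [1, 1, 1, 1, 1, 1, 1]
# ... but at generic points x*floor(1/x) lies in (1 - x, 1), so g is 0:
print([g(0.7 * 2.0**-k) for k in range(1, 8)])   # [0, 0, 0, 0, 0, 0, 0]
# From the left, x*floor(1/x) lies in [1, 1 + |x|), so g is 1:
print([g(-0.7 * 2.0**-k) for k in range(1, 8)])  # [1, 1, 1, 1, 1, 1, 1]
```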
Prove that $(a+1)^{b}>(b+1)^{a}$
As you said, since $f(x)=\sqrt[x]{x+1}$ is decreasing for positive $x$, for $a>b>0$, we can write $$\sqrt[b]{b+1} > \sqrt[a]{a+1}.$$ Raising each side to the $ab$th power will preserve the inequality and give $$(b+1)^a > (a+1)^b.$$ This makes me believe the inequality is supposed to be reversed. Alternatively, it is intended that $b>a>0$ instead of $a>b>0$.
How to determine a function of a matrix is increasing or decreasing
If you're trying to do the calculus on a vector space, remember that the derivative of $$f:\Bbb R^n\to\Bbb R^m$$ is an $m\times n$ matrix. In particular, if $m\ne n$ how do you plan to talk about semi-definiteness? And even if you have $m=n$, recall that semi-definite matrices are also symmetric. The mixed partial derivatives need not be equal, so what if ${\partial^2f\over\partial y\partial x} \ne{\partial^2f\over\partial x\partial y}$? Since you want the more special case of $f:\Bbb R^n\to \Bbb R$ where a notion of "increasing" is a little more specific, then there is the tool of the directional derivative which can tell you about increasing or decreasing in a specific direction, but again this isn't really related to the semi-definiteness. Especially of note is that the derivative is a vector, not a number, and definitely not a (symmetric) $n\times n$ matrix of any sort. Note that in this case if you are increasing in one direction you are decreasing in the opposite direction, so it's hard to make a canonical choice for which "increasing" is clearly what you mean. Unlike maps $g:\Bbb R\to\Bbb R$ where you have a notion of increasing in the domain, maps from spaces without an ordering are harder to pin down. If you can produce a total ordering on your domain, you could make it more precise, but it's not clear what you do when two things are only in a partial order. In particular: it's not clear any simple fact about the derivative will correspond to the monotonicity in an arbitrary order, which I think is what you're looking for. In your example: I don't think you want the $X$ matrices in there, you want just two matrices, $X\le Y$ where $\le $ means the difference is positive semi-definite. So we want to show $\log |X|\le \log |Y|$. Then $$\log |Y|-\log |X|=\log |YX^{-1}|.$$ Two things to note: ($1$) if you don't demand definite rather than semi-definite, you can have a zero matrix, which makes the log thing undefined. So this is no good unless you demand definite. ($2$) You can only define this on things which already have positive determinant, since a negative determinant gives $\log$ of a negative number, which is undefined. But then $|XY^{-1}|$ has a negative log iff $|XY^{-1}|<1\iff 0<|X|< |Y|$. So if we find $X,Y$ with $|X|<|Y|$ but $Y-X$ is positive definite, we have a contradiction. For this choose $$X=\begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix}, Y=\begin{pmatrix} 2 & -6 \\ 1 & 2\end{pmatrix}$$ then $$Y-X = \begin{pmatrix} 1 & -6 \\ 1 & 1\end{pmatrix}, |X|=1, |Y|=10.$$ The sub-determinants of $Y-X$ are $1, 7$ respectively, so it is positive definite. So this is a counterexample for that ordering. At the same time I think your example misses the forest for the trees. The point is that there's not a simple test of when something is increasing based on the derivatives. Whatever it would mean to be increasing at a matrix, there is not going to be a property of a given matrix which relates to the derivative in a tidy way to talk about increasing. In particular, you want, psychologically, for the positive definiteness property of a matrix to be analogous to the positive quality of a real number and for this to translate into an easy test on matrix functions for monotonicity. The problem: in the original context you got this from, the sign of the derivative lives in the codomain, not the domain. Your question asks about the domain, which is a totally different animal.
Now, you might alter the question to talk about the derivative, which is a vector, but--as I have said earlier--there is no canonical notion of positivity for a vector, so there's no way to phrase your question in a way which is actually analogous to the original test you are motivated by. You've imposed some really complicated conditions, but none of them actually has anything to do with your original question; they're just bending it in a direction which is unrelated to the original, and which tries to talk about monotonicity in a totally derivative-unrelated way.
Number of ways to seat boys and girls such that no boys sit together
When there are $N$ seats, the number of ways such that none of the boys sit together is given by $F_{N+2}$, where $F_N$ is the $N$th Fibonacci number: an arrangement is a binary string of length $N$ (boy $=1$, girl $=0$) with no two adjacent $1$s, and the number of such strings satisfies the Fibonacci recurrence (condition on whether the last seat holds a boy). A brute-force check is sketched below. Hope it helps.
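Here is that brute-force check (a small Python sketch, my addition; it assumes seats in a row and counts boy/girl patterns with no two boys adjacent):

```python
from itertools import product

def no_adjacent_boys(n):
    # Count 0/1 strings of length n (1 = boy) with no two adjacent 1s.
    return sum(1 for s in product([0, 1], repeat=n)
               if all(not (a and b) for a, b in zip(s, s[1:])))

def fib(n):
    # F_0 = 0, F_1 = 1, F_2 = 1, ...
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(1, 12):
    assert no_adjacent_boys(n) == fib(n + 2)
print("counts match F_{N+2} for N = 1..11")
```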
Quadratic extension of quadratic extensions
For the first one: $e^{\frac{2 \pi i}{3}} = \cos(\frac{2 \pi}{3}) + i \sin(\frac{2 \pi}{3}) = - \frac{1}{2} + \frac{1}{2} \sqrt{-3}$, so $\mathbb{Q}(e^{2 \pi i/3}) = \mathbb{Q}(\sqrt{-3})$. Can you do the second one now?
Variation of Linear Least Squares Minimization Problem
Minimizing $\|Ax + b\|$ is, strictly speaking, not a least squares problem. It just happens to be equivalent to the least squares problem of minimizing $\|Ax + b\|^2$, because $x \mapsto x^2$ is increasing on $[0, \infty)$. Minimizing $\sum_i \|Ax_i + b\|$ is however fundamentally different from minimizing $\sum_i \|Ax_i + b\|^2$. In the latter case, you are adding sums of squares, which gives you another sum of squares. Thus you get another least squares problem, which is what the author is referring to. In the case of $\sum_i \|Ax_i + b\|$, you add square roots of sums of squares, and clearly you don't have $\sqrt{x^2 + y^2} + \sqrt{u^2 + v^2} = \sqrt{x^2 + y^2 + u^2 + v^2}$. That makes this a much harder problem. Deriving the formula in the book can be done as follows, at least if I assume that the $v_j$ are unit vectors, which is necessary to make $I - v_jv_j^t$ a projection matrix. For convenience, put $A_j = I - v_jv_j^t$ while noting that this is a symmetric, idempotent matrix. Then we differentiate $\sum_j\|A_j(P-c_j)\|^2$ termwise to get the following equation for the critical point: $$ \sum_j A_j^tA_j(P-c_j) = 0. $$ Because of the special form of $A_j$ we have $A_j^tA_j = A_j$, which makes it possible to simplify and expand the equation to $$ \left(\sum_j A_j\right)P = \sum_j A_jc_j $$ which obviously leads to the desired result.
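For what it's worth, here is a small numpy sketch of the resulting formula $\left(\sum_j A_j\right)P=\sum_j A_jc_j$ for the least-squares point closest to a family of lines (the function name and toy data are mine):

```python
import numpy as np

def nearest_point_to_lines(c, v):
    """Minimize sum_j ||(I - v_j v_j^T)(P - c_j)||^2 over P.

    c: (m, d) points on the lines; v: (m, d) unit direction vectors.
    Assumes the lines are not all parallel, so the system is solvable.
    """
    d = c.shape[1]
    A = sum(np.eye(d) - np.outer(vj, vj) for vj in v)                  # sum_j A_j
    rhs = sum((np.eye(d) - np.outer(vj, vj)) @ cj for cj, vj in zip(c, v))
    return np.linalg.solve(A, rhs)

# Two lines in the plane meeting at (1, 1): y = 1 (horizontal) and x = 1 (vertical).
c = np.array([[0.0, 1.0], [1.0, 0.0]])
v = np.array([[1.0, 0.0], [0.0, 1.0]])
print(nearest_point_to_lines(c, v))   # [1. 1.]
```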
"Sharp" Inequalities
Here is another definition of sharpness that demonstrates that the inequality is optimal (i.e., best-possible). If $f:\mathbb{R}^n \longrightarrow \mathbb{R}$, $g:\mathbb{R}^n \longrightarrow \mathbb{R}$, and \begin{equation} f(x) \geq g(x),\tag{1} \label{ineq} \end{equation} $\forall x \in S \subseteq\mathbb{R}^n$, then \eqref{ineq} is called sharp if there is an element $\hat{x} \in S$ such that $f(\hat{x}) = g(\hat{x})$. Why is this inequality the best possible inequality? If \eqref{ineq} is sharp, then there is not a function, say $h:\mathbb{R}^n \longrightarrow \mathbb{R}$, such that \begin{equation} f(x) \ge h(x) > g(x) \end{equation} $\forall x \in S$—otherwise $f(\hat{x}) > g(\hat{x})$. Thus, the inequality is the best-possible.
Irreducible Polynomial over $\mathbb{Z}[X,Y]$
$$\mathbb{Z}[X,Y] \simeq \mathbb{Z}[X][Y] \simeq \mathbb{Z}[Y][X]$$ So you can view $P(X,Y)$ as a polynomial in $Y$ of degree $3$ over $\mathbb{Z}[X]$, or as a polynomial in $X$ of degree $2$ over $\mathbb{Z}[Y]$ (link). A polynomial of degree $2$ or $3$ that is not irreducible must have a linear factor. So assume that $(X-p(Y))$ is a linear factor of $X^2Y^3+X(Y^2+Y)+8$ in $\mathbb{Z}[Y][X]$. Then $p(Y)$ must be a fraction $\frac{k}{s}$ where $k$ is a divisor of $8$ and $s$ is a divisor of $Y^3$ (link). These are finitely many possibilities to check, and none of them gives a solution. Check the slightly different polynomial $P(X,Y)=X^2Y^3+XY^2+XY+1$ and you will find a factor with this method, because $(X+\frac{1}{Y})$ is a linear factor. This can be shown by checking that $P(\frac{-1}{Y},Y)=0$ holds.
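A quick check with sympy (my addition) confirms both claims:

```python
from sympy import symbols, factor

X, Y = symbols('X Y')

print(factor(X**2*Y**3 + X*(Y**2 + Y) + 8))  # returned unchanged: no factorization over Z
print(factor(X**2*Y**3 + X*Y**2 + X*Y + 1))  # (X*Y + 1)*(X*Y**2 + 1)
```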
Problem in the measurability of $f+g$.
As written, your proof indeed looks strange. First, to prove that $E_\alpha$ is measurable you have to find an equality whose right-hand side is measurable (an inclusion is not enough). Furthermore, the usual way to prove that $f+g$ is measurable is to notice that: $$x \in E_\alpha \iff \exists r \in \mathbf{Q} \; f(x)>r>\alpha-g(x)$$ Thus: $$E_\alpha = \bigcup_{r\in \mathbf{Q}}\{y \; \vert \; f(y)>r\} \cap \{y \; \vert \; g(y)>\alpha-r\}$$ Since $\mathbf{Q}$ is countable, the right-hand side is measurable. I don't know exactly what the sequence $(r_n)_{n\in \mathbf{N}}$ is, maybe an enumeration of $\mathbf{Q}$?
Asymptotics of incomplete Beta function $B_{1/2}(y+1,y)$ when $y\to\infty$
This is straightforward steepest descent (or Laplace's method). Let $f(x):=\ln(x)+\ln(1-x)$ and $g(x)=(1-x)^{-1}$, so that the integral becomes: $$\int_0^{1/2}g(x)e^{yf(x)}dx.$$ On $[0,1/2]$, $f$ achieves its maximum at $x_0=1/2$. Furthermore, $f''(x)=-1/x^2-1/(1-x)^2$, and $f''(x_0)<0$. Note that $f'(x_0)=0$ but $x_0$ is an endpoint of the interval of integration, so only half of the Gaussian peak contributes, and Laplace's method gives: $$\int_0^{1/2}g(x)e^{yf(x)}dx\sim \frac12\sqrt{\frac{2\pi}{y\,|f''(x_0)|}}\,g(x_0)\,e^{yf(x_0)}$$ in the sense that the ratio of the LHS and RHS converges to $1$ when $y\to+\infty$. Can you wrap it up from here? Note the extra $1/\sqrt{y}$ that's going to contribute.
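As a sanity check: with $f(1/2)=\ln(1/4)$, $f''(1/2)=-8$ and $g(1/2)=2$, the formula above reads $B_{1/2}(y+1,y)\sim\frac12\sqrt{\pi/y}\,4^{-y}$. Here is a numeric verification of that (my addition, using scipy's regularized incomplete beta):

```python
import numpy as np
from scipy.special import betainc, betaln

# B_{1/2}(y+1, y) = betainc(y+1, y, 1/2) * B(y+1, y)
for y in [10.0, 50.0, 200.0, 400.0]:
    exact = betainc(y + 1, y, 0.5) * np.exp(betaln(y + 1, y))
    approx = 0.5 * np.sqrt(np.pi / y) * 4.0 ** (-y)
    print(y, exact / approx)   # ratios tend to 1 as y grows
```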
(FD) Quotient by commutator ideal a subalgebra?
Look at $$\mathcal A= \{ f: [0,1]\to M_2(\Bbb C) \mid f(0) = a\,e_{11}, f(1)= b\,e_{22}, \ \ a,b\in \Bbb C\}$$ where $e_{ij}$ is the usual basis of $M_2(\Bbb C)$. Any commutator must vanish at $0$ and $1$. Outside of those two points note that any matrix in $M_2(\Bbb C)$ is in the commutator ideal of $M_2(\Bbb C)$. In particular you may write $e_{ij} = \sum_k A_k [B_k, C_k]$; further, you may write any function $f$ on $[0,1]\to\Bbb C$ as a product of $3$ functions to get: $$f(x) e_{ij} = f_{ij}(x) \sum_k A_k[B_k,C_k]= \sum_k h^{1}(x) A_k [h^2(x) B_k, h^{3}(x) C_k]$$ where if $f$ vanishes at $0$ and $1$ you may assume all the $h^k$ do so also. So: the commutator ideal of $\mathcal A$ is equal to all functions to $M_2(\Bbb C)$ vanishing at $0$ and $1$. The quotient of $\mathcal A$ by this ideal is then $\Bbb C^2$. Now let's check that $\Bbb C^2$ is not a sub-algebra of $\mathcal A$. The simplest way to do this is to note that $\mathcal A$ doesn't have two disjoint self-adjoint projections: If $p$ is a projection then it must take either the value $0$ or $e_{11}$ at $0$ (resp. $0$ or $e_{22}$ at $1$). If we have two disjoint projections then at least one of them needs to take the value $0$ at one of the endpoints (else the product is not zero). If a projection $p$ has $p(x)=0$ for some $x$ then there must be some $x'$ with $0<\|p(x')\|<1$ (else it is constantly $0$). But the equation $$p(x') \overset!= p(x')^2$$ cannot be satisfied by any self-adjoint matrix of norm greater than $0$ and less than $1$.
N-P Lemma Test in Hypothesis testing and significance of inequality sign.
$k$ is chosen as a value sufficiently large that when the likelihood ratio is greater than or equal to $k$, the p-value is less than or equal to the significance level and you will reject the null hypothesis. Hence the reason for $\geq$. Using algebra, the likelihood ratio simplifies to a function of the observed data and the parameters. Neyman and Pearson use the frequentist approach to statistical inference: parameters are unknown constants, and they are not assumed to have prior distributions. Those assumptions apply to the Bayesian approach, not the frequentist approach.
Metric on a Quotient of the Riemann Sphere
This answer necessarily omits important details because there is not a unique metric on a sphere, because the term metric can refer to different types of structure (particularly, to a "topological metric" or to a Riemannian metric), and because the intended purpose of the requested metric on the quotient has not been explained. A complex number $w = u + iv$ with $u$ and $v$ real corresponds, under stereographic projection from the north pole, to $$ (x, y, z) = \frac{(2u, 2v, u^{2} + v^{2} - 1)}{u^{2} + v^{2} + 1}. \tag{1} $$ The chordal metric on the sphere is defined by $$ d((x_{1}, y_{1}, z_{1}), (x_{2}, y_{2}, z_{2})) = \sqrt{(x_{2} - x_{1})^{2} + (y_{2} - y_{1})^{2} + (z_{2} - z_{1})^{2}}. \tag{2a} $$ The great circle metric on the sphere is defined by $$ d'((x_{1}, y_{1}, z_{1}), (x_{2}, y_{2}, z_{2})) = \arccos(x_{1}x_{2} + y_{1}y_{2} + z_{1}z_{2}). \tag{2b} $$ Combining, you get a metric $$ d(w_{1}, w_{2}) = d(u_{1} + iv_{1}, u_{2} + iv_{2}) $$ on the complex plane by using (1) to associate points $(x_{1}, y_{1}, z_{1})$ and $(x_{2}, y_{2}, z_{2})$ to your complex numbers, then using (2a) or (2b) or some other metric of your choosing on the sphere. The induced metric on the quotient of the sphere by the involution $w = u + iv \mapsto \frac{1}{w} = \frac{u - iv}{u^{2} + v^{2}}$ can be found by taking two complex numbers $w_{1}$ and $w_{2} \notin \{w_{1}, 1/w_{1}\}$ and using the scheme of the preceding paragraph to find the smaller of $d(w_{1}, w_{2})$ and $d(1/w_{1}, w_{2})$. This algorithm can, in principle, be expressed as an explicit formula, but to do so would be uselessly messy.
Equivalence Relation with dividing integers
Given that $x \sim y \iff x+3y \equiv 0 \mod 4$ for all $x, y \in \mathbb{Z}$. 1) Reflexive property: we need to prove $x \sim x$ for all $x \in \mathbb{Z}$. Since $x+3x = 4x \equiv 0 \mod 4$, we have $x \sim x$. 2) Symmetry property: we need to prove that $x \sim y \implies y \sim x$ for all $x, y \in \mathbb{Z}$. $x \sim y \implies x+3y \equiv 0 \mod 4 \implies 3(x+3y) \equiv 0 \mod 4 \implies y+3x \equiv 0 \mod 4$ (using $9y \equiv y$), i.e. $y \sim x$. 3) Transitive property: we need to prove $x \sim y \land y \sim z \implies x \sim z$ for all $x, y, z \in \mathbb{Z}$. $x \sim y \land y \sim z \implies x+3y \equiv 0 \mod 4 \land y+3z \equiv 0 \mod 4 \implies (x+3y)+(y+3z) \equiv 0 \mod 4 \implies x+3z \equiv 0 \mod 4$ (since $4y \equiv 0$), i.e. $x \sim z$. Thus the given relation is an equivalence relation.
Sheaves of analytic functions were once called multi-valued functions.
For example, the square root function has two branches. On the reals, you may choose the principal branch, but over the complex plane there is no consistent way to do this without excluding a branch cut. One solution is to treat the square root as a multi-valued function. $\sqrt{r^2e^{i2\theta}}=\{re^{i\theta},re^{i(\theta+\pi)}=-re^{i\theta}\}.$ The more modern approach uses the language of sheaf theory. The sheaf of germs of the square root function defines a two-sheeted cover of the complex plane. A sheaf is an assignment of a fiber to each point (subject to some conditions), so the answers are equivalent.
Assuming matrices $A,B,C \;\text{and}\;D\;$ are $ n\times n,$ show two ways that $(A+B)(C+D)=AC+AD+BC+BD$
You can use the distributive property of matrices to show this the second way. Observe that $(A+B)$ and $(C+D)$ are each themselves $n\times n$ matrices. Using right distribution, $(A+B)(C+D) = (A+B)C + (A+B)D$. Then distributing to the left of each term, we obtain the desired result.
A continuous bijection from the Cantor Set to [0,1]
On the one hand we have $C=\{ \sum_{k \in N}\frac{a_k}{3^k}$, where $a_k\in \{0,2\} \}$. On the other hand we have $[0,1]=\{ \sum_{k \in N}\frac{b_k}{2^k}$ where $b_k \in \{0,1\} \}$. We put $f(\sum_{k \in N}\frac{a_k}{3^k})=\sum_{k \in N}\frac{c_k}{2^k}$ where $c_k=0$ if $a_k=0$ and $c_k=1$ if $a_k=2$. Then $f$ is a continuous surjection. It is not injective, because $f(\frac{1}3)=f(0.0222...)=\frac{1}{2}=f(0.2)=f(\frac{2}{3})$.
Intermediate value theorem to show there are at least two values of x
Indeed, there is only one solution to $x=1+\sin(x)$ in $(-\pi,\pi)$: plotting $y=x$ against $y=1+\sin(x)$ shows the two graphs crossing exactly once there (plot omitted).
How many usernames are there and how do you calculate it?
There are 26 uppercase letters, 26 lowercase letters, and 10 numerical digits, giving 62 possibilities for each of the eight characters. So there are $62\times 62\times \ldots \times 62 = 62^8$ possible usernames, which is a massive number. This is in case 2, when anything goes. In case 1, once you have chosen a character, you are left with 61 possibilities for the next, then 60 possibilities for the next, giving the answer as $62\times 61\times \ldots \times 55$ usernames. In general, therefore, with a string of $n$ characters you have $62^n$ possible usernames. With no repetitions, this becomes $\frac{62!}{(62-n)!}$ usernames.
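In Python (a one-line check each, my addition):

```python
import math

n = 8
print(62 ** n)           # 218340105584896 usernames with repeats allowed
print(math.perm(62, n))  # 136325893334400 usernames with no repeated characters (62*61*...*55)
```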
Arclength of Complex Exponential
As mentioned in the comments, the issue is the lack of a square root in the first displayed equation.
Prove $\int\limits_0^{\infty}\left | \frac{\cos(x)}{1+x}\right |dx$ diverges
HINTS: Write the integral as the sum $$\begin{align} \int_{\pi/2}^\infty \frac{|\cos(x)|}{1+x}\,dx&=\sum_{k=1}^\infty \int_{(2k-1)\pi/2}^{(2k+1)\pi/2}\frac{|\cos(x)|}{1+x}\,dx\\\\ &=\sum_{k=0}^\infty \int_{\pi/2}^{3\pi/2}\frac{-\cos(x)}{1+x+k\pi}\,dx\\\\ \end{align}$$ Then, note that $\frac1{1+{(k+3/2)\pi}}\le \frac{1}{1+x+k\pi}\le \frac{1}{1+(k+1/2)\pi}$ when $x\in[\pi/2,3\pi/2]$ and that $\int_{\pi/2}^{3\pi/2}(-\cos(x))\,dx=2$.
Do I understand the Chevalley Restriction Theorem correctly?
No. Look at the matrix $\left(\begin{smallmatrix}0 & -1 \\ 1 & 0\end{smallmatrix}\right) \in \mathfrak{g}=\mathfrak{sl}(2)$. This is a sum of two elements of two root spaces using the standard Cartan. But the trace of the square is not zero, and trace of the square is an invariant polynomial.
Product of orthonormal bases of $L_2([0,1])$ is an orthonormal basis of $L_2([0,1]^2)$?
First verify that $\{v_{k,l}\}_{k,l \ge 0}$ is an orthonormal set in $L^2([0,1]^2)$: \begin{align} \langle v_{k,l}, v_{r,s}\rangle &= \iint_{[0,1]^2} v_k(x)v_l(y)\overline{v_r(x)v_s(y)}\,dx\,dy \\ &= \left(\int_{[0,1]} v_k(x)\overline{v_r(x)}\,dx\right)\left(\int_{[0,1]} v_l(y)\overline{v_s(y)}\,dy\right)\\ &= \langle v_k, v_r\rangle \langle v_l, v_s\rangle\\ &= \delta_{kr}\delta_{ls} \\ &= \delta_{(k,l),(r,s)} \end{align} Now it suffices to show that $\{v_{k,l}\}_{k,l \ge 0}$ is a maximal set, i.e. if $h \perp v_{k,l}, \forall k,l \ge 0$, then $h = 0$. Indeed, we have $$0 = \langle h, v_{k,l}\rangle = \int_{[0,1]}\left(\int_{[0,1]}h(x,y)v_k(x)\,dx\right)v_l(y)\,dy = \left\langle y \mapsto \int_{[0,1]}h(x,y)v_k(x)\,dx, v_l\right\rangle, \forall l \ge 0$$ so $y \mapsto \int_{[0,1]}h(x,y)v_k(x)\,dx$ is equal to $0$ for almost every $y$, for every $k \ge 0$. Hence $$E_k = \left\{y \in [0,1] : \int_{[0,1]}h(x,y)v_k(x)\,dx \ne 0\right\}$$ has measure zero and so $E = \bigcup_{k \ge 0} E_k$ also has measure zero. It follows that for every $y \in [0,1] \setminus E$ we have $$\langle x \mapsto h(x,y), v_k(x) \rangle = \int_{[0,1]}h(x,y)v_k(x)\,dx = 0, \forall k \ge 0$$ and so $h(x,y) = 0$ for almost every $x \in [0,1]$. Therefore $h= 0$ almost everywhere which completes the proof.
How to work out miles between Longitude values based on a Latitude value.
Firstly, let us assume that 'latitude' is measured by the angle between the north pole and your position on the surface, as opposed to equal arc-lengths along the surface. No one wants to play around with elliptic integrals of the second kind on a whim. As said by Matt B, let us assume that the Earth is an ellipsoid with the semi-major axes along x and y being equal, let's call that $A$ (which has a value of 6399592m), and the semi-minor axis along z joining the poles of length $B$ (which has a value of 6335437m). Let us take the trace of the ellipsoid intersecting the x-z plane and parametrise it in terms of this polar angle $\phi$: we will take the north pole to be 0° and the equator to be 90°. For a given latitude, therefore, the positions for all longitudes are found on the locus of a circle of radius $A \sin \phi$. As long as you measure the longitude angle in radians, the distance between two points on the same latitude is just $A \sin \phi\, \Delta \theta$, where $\Delta\theta$ is the difference in longitude. If you are willing to approximate to a sphere, you can use the following result from spherical trigonometry to work out the central angle (here $\phi$ denotes the usual latitude, measured from the equator): $$ \cos s = \sin^2 \phi + \cos^2 \phi \cos \Delta \theta $$ Multiply this $s$ by a sphere radius (say the average of the two semi-axes above), and you have an approximate value. If you are looking for a more precise answer you'll need to use reference ellipsoids such as WGS84.
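As a rough spherical sketch (my addition; I use a mean Earth radius of 6371 km rather than the ellipsoid values above, and latitude is measured from the equator):

```python
import math

R = 6371.0  # mean Earth radius in km (spherical approximation, assumed)

def east_west_km(lat_deg, dlon_deg):
    # Distance along the parallel: circle-of-latitude radius times longitude difference.
    return R * math.cos(math.radians(lat_deg)) * math.radians(dlon_deg)

def great_circle_km(lat_deg, dlon_deg):
    # Central angle from cos(s) = sin^2(lat) + cos^2(lat) cos(dlon), then arc length.
    lat, dlon = math.radians(lat_deg), math.radians(dlon_deg)
    s = math.acos(math.sin(lat) ** 2 + math.cos(lat) ** 2 * math.cos(dlon))
    return R * s

print(east_west_km(45.0, 10.0))     # ~786 km along the 45th parallel
print(great_circle_km(45.0, 10.0))  # slightly shorter: the parallel is not a geodesic
```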
Checking for convergence of $\sum_{n=1}^\infty \frac{(-i)^n}{n}$
Just apply Dirichlet's test and use the fact that, for each natural $N$,$$\left\lvert\sum_{n=1}^Ni^n\right\rvert=\left\lvert\frac{i-i^{N+1}}{1-i}\right\rvert\leqslant\sqrt2.$$
Is $\nabla\phi\nabla\psi$ a scalar product or a dyadic product?
Your calculation is correct. You've clearly shown that when the author of the identity you're checking wrote $2\nabla\phi\nabla\psi$, they really meant $2\left(\nabla\phi\right)\cdot\left(\nabla\psi\right)$. Let's just look at the types of objects involved, though. On the LHS of your identity you have $\Delta (\phi\psi)$. $\phi$ and $\psi$ are (presumably) scalar fields and we know that the product of scalar fields is also a scalar field. Then applying the Laplacian operator to a scalar field, it still remains a scalar field. On the RHS you have three terms. From my statements in the above paragraph we can see immediately that $\phi(\Delta\psi)$ and $\psi(\Delta\phi)$ are scalar fields. The term we're not sure of is $2\nabla\phi\nabla\psi$. Whatever type of object it is, when we add it to a scalar field (the other terms) we should get back a scalar field (the LHS). The only reasonable thing it could be then is another scalar field. But what if it weren't? Maybe $2\nabla\phi\nabla\psi$ is supposed to be the dyadic product of $\nabla \phi$ and $\nabla \psi$. In that case it'd be a rank 2 tensor field. But the sum of a rank 2 tensor field and a scalar field is undefined. And even if it were defined, it probably wouldn't be a scalar field, which is what it'd have to be for equality to hold in this identity. So the authors must have meant for it to be a scalar field. This was just a little bit of reasoning to see what type of object $2\nabla\phi\nabla\psi$ is supposed to be. For the sake of clarity if you use this identity in the future I'd suggest you write this term as $2\nabla\phi\cdot\nabla\psi$ or even $2\left(\nabla\phi\right)\cdot\left(\nabla\psi\right)$ to avoid anyone else getting confused about what it is supposed to mean. I should note that I've seen a couple of people -- including my old QM professor -- use $\mathbf v \mathbf w$ (where $\mathbf v, \mathbf w$ are vector quantities) to denote the dot product, but I'd discourage this myself. As for your question about matrices, row and column matrices can be swapped in many circumstances. For instance when just taking linear combinations of vectors, it doesn't really matter whether you're denoting the vectors as row or column matrices. But when it comes to matrix multiplication row and column matrices act very differently. In fact, what you've written in your question is incorrect. $\mathbf a\mathbf b$ is not defined if $\mathbf a, \mathbf b$ are both column matrices. Instead the product you are doing should have been written $\mathbf a \mathbf b^T$ where the $^T$ represents transposition of the matrix. Now let's compare the dyadic and dot products of column matrices: The dyadic (or outer) product is $\mathbf a\mathbf b^T$ and the dot (or inner) product is $\mathbf a^T\mathbf b$. These two products are not the same and in fact yield two different types of matrices, as you can confirm using the rules for matrix multiplication.
Checking nature of angles of a triangle given the equations of the three lines that form a triangle
The nature of the triangle can be determined without explicitly finding its vertices. Let $s_{ij} = (a_ia_j+b_ib_j)(a_ib_j-b_ia_j)$ and examine the signs of $s_{12}$, $s_{23}$ and $s_{31}$. (The order of the indices of the last one is important.) If they are all positive or all negative, then the triangle is acute. If one of them is zero and the others are either both positive or both negative, it is a right triangle. If one is negative and the other two positive, or vice-versa, the triangle is obtuse. If the odd man out is $s_{ij}$, the obtuse angle is at the intersection of lines $\ell_i$ and $\ell_j$. Any other combination of signs means that the three lines do not form a triangle. (A short implementation of this test appears at the end of this answer.) Notice that the $c_i$’s don’t enter into this solution. This makes sense: If you vary $c_i$ while holding $a_i$ and $b_i$ fixed, you get a family of parallel lines. The resulting triangles are all similar, so their nature depends only on the directions of the three lines. Why does this work? (Rather long) There’s a lot of detail below, but the key to it all is that doubling the angles lets us “factor out” the direction in which we move along the lines. Suppose we have vectors $\mathbf u=\langle u_1,u_2\rangle$ and $\mathbf v=\langle v_1,v_2\rangle$. Their dot product is defined as $$\mathbf u\cdot\mathbf v = \|\mathbf u\|\|\mathbf v\|\cos\theta = u_1v_1+u_2v_2.$$ Note that the dot product is symmetric: $\mathbf u\cdot\mathbf v = \mathbf v\cdot\mathbf u$. Also, if neither vector is zero, the sign of their dot product is determined by the angle $\theta$ between them: positive if $0\le\theta\lt\frac\pi2$, zero if $\theta=\frac\pi2$ and negative if $\frac\pi2\lt\theta\le\pi$. (This property of the dot product is used in Aretino’s answer.) The (signed) area of the parallelogram with sides defined by $\mathbf u$ and $\mathbf v$ is $$\|\mathbf u\|\|\mathbf v\|\sin\theta = \det\left[\matrix{u_1&u_2\\v_1&v_2}\right] = u_1v_2-u_2v_1.$$ If you’re familiar with the cross product, you might recognize this expression as the $z$-component of $\mathbf u\times\mathbf v$, treating them as three-dimensional vectors. Here, too, assuming that neither vector is zero, the sign is determined by the angle between them: positive if a counterclockwise rotation takes $\mathbf u$ onto $\mathbf v$; zero if they’re parallel or antiparallel; negative if the rotation is clockwise. That is, the sign of this value encodes the relative orientation of the two vectors. An immediate consequence is that this expression is antisymmetric in $\mathbf u$ and $\mathbf v$—swapping them flips the sign of the result. In the following, we’re not really interested in the magnitude of this value, only its sign, as that gives us a way to determine the relative orientation of two vectors. If we multiply these two together, we get $$(u_1v_1+u_2v_2)(u_1v_2-u_2v_1) = (\|\mathbf u\|\|\mathbf v\|\cos\theta)(\|\mathbf u\|\|\mathbf v\|\sin\theta) = \frac12\|\mathbf u\|^2\|\mathbf v\|^2\sin{2\theta}.$$ Call this $S(\mathbf u,\mathbf v)$ to reduce clutter. Note that this, too, is antisymmetric, i.e., $S(\mathbf u,\mathbf v)=-S(\mathbf v,\mathbf u)$. It also has the easily-verified property that $S(\mathbf u,\mathbf v)=S(-\mathbf u,\mathbf v)=S(\mathbf u,-\mathbf v)=S(-\mathbf u,-\mathbf v)$, that is, that you can reverse either or both of the vectors without changing the value of $S$. It’s also rotationally invariant—rotating both vectors by the same angle leaves $S$ unchanged.
We start as in Aretino’s answer: Let $P_{ij}$ be the intersection of $\ell_i$ and $\ell_j$ with corresponding angle $\alpha_{ij}$ of the triangle. Consider the vectors $\mathbf u=P_{23}-P_{12}$ and $\mathbf v=P_{31}-P_{12}$, which represent the two sides of the triangle that meet at $P_{12}$. That is, $\mathbf u$ is a directed segment of $\ell_2$ and $\mathbf v$ a directed segment of $\ell_1$. The dot product $\mathbf u\cdot\mathbf v$ will be positive if $\alpha_{12}$ is acute, zero if it’s a right angle and negative if obtuse. We can make a similar argument for each of the other two vertices. Assume now that the triangle is oriented such that traversing its vertices in the order $P_{12}$, $P_{23}$, $P_{31}$ takes you in a counterclockwise direction, and at each vertex choose $\mathbf u$ and $\mathbf v$ so that we go counterclockwise from $\mathbf u$ to $\mathbf v$. With these conditions, $S(\mathbf u,\mathbf v)$ will also be positive, zero or negative, respectively, as the corresponding angle of the triangle is acute, right or obtuse. Reversing the direction in which the vectors point doesn’t change $S$, so we can cut down on the proliferation of vectors by only using ones that follow the order of the vertices: $P_{23}-P_{12}$, $P_{31}-P_{23}$ and $P_{12}-P_{31}$. If the vertices are instead arranged in a clockwise direction, this amounts to swapping $\mathbf u$ and $\mathbf v$ at each vertex, which flips the sign of $S(\mathbf u, \mathbf v)$ at all three. Putting all of this together we get the three rules at the top, albeit in terms of vectors that represent the known sides of the triangle. Since we care only about the sign of $S(\mathbf u,\mathbf v)$ we don’t really need the lengths of the sides of the triangles, we just need vectors that point in the same directions. We can read those off from the equations of the lines: both $\langle-b_i,a_i\rangle$ and $\langle b_i,-a_i\rangle$ are parallel to $\ell_i$. It’s not immediately obvious which of these is the right one to use, but because we can negate either or both of the arguments to $S$ without changing its value, it doesn’t matter. Setting $s_{ij}=S(\langle-b_i,a_i\rangle,\langle-b_j,a_j\rangle)$ and expanding gives us the result at the top. We can also use vectors perpendicular (normal) to the sides—$\langle a_i,b_i\rangle$ or their negations—instead of vectors parallel to the sides since $\langle a_i,b_i\rangle$ is $\langle b_i,-a_i\rangle$ rotated 90 degrees and we can negate the arguments to $S$ at will. I like to have a geometric view to back up the algebra, so assume that we got lucky: the triangle is oriented counterclockwise and the equations of the lines are such that all of the $\mathbf{n}_i$ point outwards. In this configuration, the angle $\theta_{ij}$ between vectors $\mathbf{n}_i$ and $\mathbf{n}_j$ can easily be seen to be the supplementary angle to $\alpha_{ij}$, namely $\pi-\alpha_{ij}$. (See the perpendicular bisector theorem for triangles if you want something more rigorous.) This flips the sign of $\sin{2\theta}$, but we’ve also reversed their relative orientation, so the sign of $S$ for $\alpha_{ij}$ is unchanged by using normals. Thus, the rules for determining the nature of a counterclockwise triangle using outward-pointing normals are the same as before. Now, what happens if we reverse one of these vectors, say $\mathbf{n}_1$?
This adds $\pi$ to the angles between it and its neighbors, which changes the sign of both the sine and cosine of these angles, so the sign of their product is unchanged. Reversing two of the vectors is the same as reversing one and rotating by $\pi$, while reversing all of them is the same as rotating them all by $\pi$. Rotating all of the vectors by the same amount doesn’t change the angles between them, so the signs of the $s_{ij}$ are unchanged by any of these reversals. If the triangle is oriented clockwise, we’ve already seen that this just changes the sign of all of the $s_{ij}$. Thus, the rules for determining the type of triangle also hold if we work with vectors normal to the lines, and it doesn’t matter which of the two normal directions we choose for each. As a final note, you can dig a bit deeper to determine if the triangle is isosceles or equilateral by using the fact that $\tan\theta_{ij}={\sin{\theta_{ij}}\over\cos{\theta_{ij}}}={a_ib_j-b_ia_j\over a_ia_j+b_ib_j}$.
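Here is the promised implementation of the sign test (a sketch, my own code; lines are given by their $(a_i,b_i)$ coefficients, the $c_i$ being irrelevant as noted above):

```python
def s(l1, l2):
    (a1, b1), (a2, b2) = l1, l2
    return (a1 * a2 + b1 * b2) * (a1 * b2 - b1 * a2)

def classify(l1, l2, l3):
    """Nature of the triangle formed by lines a_i x + b_i y + c_i = 0."""
    signs = [s(l1, l2), s(l2, l3), s(l3, l1)]   # note the index order of the last one
    pos = sum(1 for v in signs if v > 0)
    neg = sum(1 for v in signs if v < 0)
    zero = signs.count(0)
    if zero == 0 and (pos == 3 or neg == 3):
        return "acute"
    if zero == 1 and (pos == 2 or neg == 2):
        return "right"
    if zero == 0 and (pos, neg) in [(1, 2), (2, 1)]:
        return "obtuse"
    return "not a triangle"

# x = 0, y = 0 and x + y = 1 bound a right triangle (right angle at the origin):
print(classify((1, 0), (0, 1), (1, 1)))   # right
```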
Cauchy integral formula for $f(z) = \frac{z-a}{z+a}$
You've done most of the work: you just need to make sure that the contour has only the pole at zero inside it, so take $\int_{|z|=r}$ where $r<\lvert a \rvert$ instead. Now, what has happened on taking the reciprocal is that the multiple pole at zero, originally the only pole inside the contour, is now outside, while the simple pole at $z=-a$ has become one at $w=-1/a$, which now lies inside the contour. Note also that the substitution $z=1/w$ reverses the orientation of the contour, and the resulting sign cancels the minus sign coming from $dz=-dw/w^2$. You can use the Cauchy Integral Formula to write down the integral at this value of $w$: $$ \frac{n!}{2\pi i} \int_{|w|=1/r} \frac{1/a-w}{1/a+w} w^{n-1} \, dw = n! \cdot (1/a-(-1/a)) (-1/a)^{n-1} = -2(-1/a)^n n!, $$ as desired.
Find the radius of convergence
Let $r_n:=a_{n} /a_{n-1}$. By assumption, $r_n=r_{n-1}l\left(1 +\varepsilon_n\right)$, where $\lim_{n\to +\infty}\varepsilon_n=0$. If $l\gt 1$, there exists $n_0$ such that $l\left(1 +\varepsilon_n\right)\gt (l+1)/2\gt 1 $ for $n\geqslant n_0$ hence $r_n\to +\infty$. If $l\lt 1$, there exists $n_0$ such that $l\left(1 +\varepsilon_n\right)\lt (l+1)/2\lt 1 $ for $n\geqslant n_0$ hence $r_n\to 0$. Conclude by the ratio test.
Find all possible values $\sqrt{i}+\sqrt{-i}$
Hint: $i=\frac{1}{2}(1+i)^2$ and $-i=\frac{1}{2}(1-i)^2$.
Curvature flow for convex plane curves
It's not about plane curves, but here's an example. In 1974, Firey introduced a model to describe the evolution of a stone submitted to abrasive processes (itself a step on the very old question of understanding the shape of pebbles). The model boils down to a (Gauss) curvature flow. In 1999, Ben Andrews proved the 2D equivalent of the evolution result you quoted: under the curvature flow, every (strictly) convex surface remains convex and (after rescaling) converges to a sphere. So, in a way, the evolution of pebbles is a real-life phenomenon related to curvature flows. You can find more information on this kind of question in chapter VII of Marcel Berger's Geometry Revealed.
What would this do? Integral math, trying to figure out how integrals work.
This is meaningless. $\lim_{a\to b}$ by itself doesn't mean anything. You need to put a function of $a$ next to it, for example $\lim_{a\to b}f(a)$. With your edit, it's technically meaningful but I doubt it's the solution to an interesting and natural problem.
Derivative in the sense of distributions
You must guess what the derivative is. Since $|\cos x|$ is differentiable almost everywhere, the pointwise derivative should work. Then you must verify that this pointwise derivative satisfies the definition of the weak derivative.
Find the asymptotes of the Folium of Descartes ($x^3+y^3-3xy=0$)
I notice that in your parametrization $$ x+y+1 = \frac{3t}{1+t^3} + \frac{3t^2}{1+t^3}+1 = \frac{(1+t)^2}{1-t+t^2},$$ which tends to $0$ as $t\to-1$ (where both $x$ and $y$ blow up), so $x+y+1=0$ is the asymptote.
How do I find coefficient of $x^{10}$ in $(1+x+x^2+...+x^6)^3$
I will use the standard notation $[x^n]\,f(x)$ for denoting the coefficient of $x^n$ in the Taylor series of $f(x)$ centered at the origin. We have: $$ [x^{10}](1+x+\ldots+x^6)^3 = [x^{10}]\frac{(1-x^7)^3}{(1-x)^3} \tag{1}$$ and the RHS of $(1)$ is simple to compute by convolution, since both the Taylor series of $(1-x^7)^3$ and $\frac{1}{(1-x)^3}$ are well-known: the first one is an instance of the binomial theorem, the latter follows from stars and bars: $$ \frac{1}{(1-x)^3} = \sum_{n\geq 0}\binom{n+2}{2}x^n \tag{2}$$ and the coefficient of $x^{10}$ in the following product $$ \left(\sum_{n=0}^{3}\binom{3}{n}(-1)^n x^{7n}\right)\cdot\left(\sum_{n\geq 0}\binom{n+2}{2}x^n\right) \tag{3} $$ is given by the following Cauchy product/convolution: $$ \sum_{n=0}^{1}\binom{3}{n}(-1)^n\binom{(10-7n)+2}{2}=36.\tag{4} $$ This approach can be applied for finding $[x^a](1+x+\ldots+x^b)^c$ too.
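The same coefficient can be extracted mechanically with sympy (a verification of $(4)$, my addition):

```python
from sympy import symbols, Poly

x = symbols('x')
p = Poly(sum(x**k for k in range(7)) ** 3, x)   # (1 + x + ... + x^6)^3
print(p.coeff_monomial(x**10))                   # 36
```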
I can't understand this limit of a sum proof
This is the rationale. We arbitrarily pick an $\epsilon > 0$. We want to find a $\delta$ where $|x-a|< \delta$ will mean that $|(f(x)+g(x)) -(K+L)| < \epsilon$. How will we do that? Well, $|(f(x)+g(x)) -(K+L)|=|(f(x)-K)+(g(x)-L)| \le |f(x)-K| + |g(x)-L|$. So if we can prove that $|f(x)-K| + |g(x)-L| < \epsilon$ we will be done. Now $f(x)\to K$ so for any $\overline \epsilon > 0$ we can find a $\delta_1$ so that $|x-a|< \delta_1$ implies $|f(x)-K| < \overline \epsilon$. And $g(x)\to L$ so for any $\epsilon' > 0$ we can find a $\delta_2$ so that $|x-a| < \delta_2$ implies $|g(x)-L|< \epsilon'$. So if $|x-a| < \min(\delta_1, \delta_2)$ then we would have $|f(x)-K| + |g(x)-L| < \overline \epsilon + \epsilon'$. But we WANT $|f(x)-K| + |g(x)-L| < \epsilon$. So if we can have $\overline \epsilon + \epsilon' = \epsilon$ we will be done. That is, our proof would be: For $\epsilon > 0$ let $\overline \epsilon$ be so that $0 < \overline \epsilon < \epsilon$ and let $\epsilon' = \epsilon -\overline \epsilon$, so that $\epsilon = \overline \epsilon + \epsilon'$. There exist $\delta_1$ and $\delta_2$ so that if $|x-a| < \delta_1$ then $|f(x)-K|< \overline \epsilon$, and if $|x-a| < \delta_2$ then $|g(x) -L|< \epsilon'$. So if $\delta = \min(\delta_1, \delta_2)$ then $|x-a|< \delta$ means $|x-a|< \delta_1$, so $|f(x)-K|< \overline \epsilon$, and as well $|x-a|< \delta_2$, so $|g(x)-L|< \epsilon'$. And so $|x-a|<\delta$ means $|(f+g)(x) - (K+L)| \le |f(x)-K|+|g(x)-L| < \overline \epsilon + \epsilon' = \epsilon$. So $\lim_{x\to a} (f+g)(x) = K + L$. Now the only thing left is to figure out, for an arbitrary positive $\epsilon$, how do we find positive $\overline \epsilon, \epsilon'$ so that $\overline\epsilon + \epsilon' = \epsilon$? And... well, if $\overline \epsilon = \frac {\epsilon}2$ and $\epsilon' = \frac {\epsilon}2$ then $\overline \epsilon + \epsilon' =\epsilon$ is a good choice. We could have chosen any other pair though.
English or French translation of a paper in German
The paper is quite short, so I’ve simply summarized it here. It’s an addendum to a paper by Stolz with the same title in vol. $14$, pp. $231$-$240$. He says that when he wrote the earlier paper he missed a relevant note by V. Rouquet. Rouquet, he says, starts from the following lemma (in which I’ve modernized the notation and some of the terminology): Lemma. If $F(x)$ is continuous at all finite $x>x_1$ and equipped with a derivative $F'(x)$ that has a limit as $x\to+\infty$, then the fraction $F(x)/x$ also has a limit as $x\to+\infty$, given by $$\lim_{x\to+\infty}\frac{F(x)}x=\lim_{x\to+\infty}F'(x)\;.$$ In a footnote he remarks that although Rouquet’s proof assumes that $\lim F'(x)$ is finite, the lemma holds when $\lim_{x\to+\infty}F'(x)=+\infty$ as well and gives a proof of this. He notes that the lemma does not require that $F(x)$ itself have a limit as $x\to+\infty$, offering the example $F(x)=\sin\sqrt{x}$. He says that Rouquet then goes on to infer that if $y=f(x)$ and $z=\varphi(x)$ are continuous functions that both tend to $+\infty$ as $x$ approaches a finite $a$ or tends to $+\infty$, one can regard $y$ as a function of $z$, and since $$\frac{dy}{dx}=\frac{f'(x)}{\varphi'(x)}\;,\tag{1}$$ we have the following result: if under these hypotheses $\lim\frac{f'(x)}{\varphi'(x)}$ exists, then so does $\lim\frac{f(x)}{\varphi(x)}$, and the two limits are equal. Stolz then points out that this is not always true. For example, if $f(x)=x+\sin x\cos x$ and $\frac{\varphi(x)}{f(x)}=e^{\sin x}$, then $\lim_{x\to+\infty}f(x)=\lim_{x\to+\infty}\varphi(x)=+\infty$, and $$\frac{f'(x)}{\varphi'(x)}=e^{-\sin x}\cdot\frac{2\cos x}{x+\sin x\cos x+2\cos x}\longrightarrow 0\quad\text{as}\quad x\to+\infty\;,$$ but $\frac{f(x)}{\varphi(x)}=e^{-\sin x}$ oscillates over the range between $\frac1e$ and $e$ and has no limit as $x\to+\infty$. Therefore, Stolz says, one can’t use formula $(1)$ unconditionally. A natural requirement for using it is the assumption that there is a $z_1$ such that $y$ is a continuous function of $z$ for all finite $z\ge z_1$ or for all finite $z\le z_1$. This, so far as he knows, can be assumed only when $x$ is a continuous function of $z$ for these values of $z$. But $\varphi$ has a continuous inverse $\psi$ only on intervals $(a-\delta,a)$ or $(a,a+\delta)$ (where $\delta>0$), and $(x_1,+\infty)$ or $(-\infty,x_1)$, on which $\varphi$ is monotonic. Moreover, $(1)$ is meaningful only when $f'(x)$ and $\varphi'(x)$ are not simultaneously $0$ or infinite. He then states a theorem that actually does follow from $(1)$ and Rouquet’s lemma: Theorem. Let $f(x)$ and $\varphi(x)$ be functions that are real-valued, continuous, and differentiable on one of the four intervals mentioned in the previous paragraph and are such that $f'(x)$ and $\varphi'(x)$ are not simultaneously $0$ or infinite. Suppose further that at least one of $f$ and $\varphi$ is monotonic on the interval and tends to $+\infty$ or $-\infty$ as $x\to a^-$, $x\to a^+$, $x\to+\infty$, or $x\to-\infty$ according to the type of interval. Then if $\lim\frac{f'(x)}{\varphi'(x)}$ exists, so does $\lim\frac{f(x)}{\varphi(x)}$, and the two limits are equal. He adds that if only $f(x)$ satisfies the conditions, e.g., if $\lim\varphi(x)$ is finite, then the signs of the infinite limits $\lim\frac{f'(x)}{\varphi'(x)}$ and $\lim\frac{f(x)}{\varphi(x)}$ can differ. 
Moreover, it is not necessary that both functions $f(x)$ and $\varphi(x)$ have limits; an example is given by $f(x)=\sin x$, $\varphi(x)=x^2$, $\lim_{x\to+\infty}\frac{f(x)}{\varphi(x)}=\lim_{x\to+\infty}\frac{f'(x)}{\varphi'(x)}=0$. This theorem, he says, agrees with that of du Bois-Reymond on p. $502$ of vol. $14$. He notes that it is also not difficult to derive it using his methods, so that it replaces the fourth theorem (on p. $238$) of his earlier paper. This is because the second theorem of that paper (on p. $234$) also holds when of the continuous functions $f(x)$ and $\varphi(x)$ only the latter is monotonic on $(x_1,+\infty)$ and therefore has an infinite limit as $x\to+\infty$; in a footnote he gives the necessary modifications to the proof. Finally, he points out that although the theorem is easily derived from Rouquet’s lemma, without any need for the lemmas in §$1$ of his earlier paper, those lemmas are nevertheless of some importance and cannot be derived from the theorem without the assumption that $\lim_{x\to+\infty}\frac{f'(x)}{\varphi'(x)}$ exists. In response to an observation by du Bois-Reymond, he concludes with a footnote justifying the observation on p. $232$ of his earlier paper that a continuous function $f(x)$ that has an infinite limit as $x$ approaches some limiting value actually has either $+\infty$ or $-\infty$ as its limit.
TDA - Persistence diagrams and Barcodes
Though there are many open source tools available for TDA (JavaPlex, Gudhi, Dionysus), the only problem is that almost all of these tools are currently in their nascent stage and poorly documented. Gudhi, for instance, got its "bottleneck distance" class quite recently. Try using Dionysus: it is an easy-to-understand Python implementation with functions like bottleneck and Wasserstein distance, which will be helpful in analyzing persistence diagrams.
Probability of Multiples
Let the probability of success be $p$, $0<p<1$. Let $X$ be a $\operatorname{Bin}(i,p)$ random variable. Then the quantity you're interested in is $$\mathbb P(X\geqslant n)=\sum_{k=n}^i \binom ik p^k(1-p)^{i-k}. $$ In particular, when $p=\frac15$, $n=2$, and $i=9$, we have $$ \begin{align*} \mathbb P(X\geqslant 2)&=\sum_{k=2}^9 \binom 9k \left(\frac15\right)^k\left(\frac45\right)^{9-k}\\&= 1 - \sum_{k=0}^{1}\binom 9k \left(\frac15\right)^k\left(\frac45\right)^{9-k}\\ &= 1 - \left(\binom 90\left(\frac15\right)^0\left(\frac45\right)^9 + \binom 91\left(\frac15\right)^1\left(\frac45\right)^8 \right)\\ &= 1 - \left(1\cdot1\cdot\left(\frac45\right)^9 +9\cdot\frac15\left(\frac45\right)^8 \right)\\ &= 1 - \left(\frac45\right)^8\left(\frac45 +\frac95 \right)\\ &= 1 - \frac{13\cdot4^8}{5^9}\\ &= \frac{1101157}{1953125}\\ &\approx 0.56379. \end{align*} $$
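The computation is easy to reproduce exactly (a short Python check, my addition):

```python
from math import comb
from fractions import Fraction

p, n, i = Fraction(1, 5), 2, 9
prob = sum(comb(i, k) * p**k * (1 - p)**(i - k) for k in range(n, i + 1))
print(prob, float(prob))   # 1101157/1953125, about 0.56379
```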
For the polynomial $P(z)=\sum_{n=1}^Na_nz^n$
Hints. As regards (d), we have that for $|z|<1$, $$|P(z)|\leq \sum_{n=1}^N|a_n||z|^n\leq \sum_{n=1}^N|a_n|.$$ Moreover for (c), consider the case when $P(z)=z$. For (b), do you know the open mapping theorem?
Steps in finding the carrying capacity K and the value of a
After cancelling $a$, you are left with the equation $$\tag{1} \frac{(1-(26.273/K))}{(1-(27.165/K))}=\frac{0.03274448}{0.03253040} = 1.0065809 $$ In order to remove numerical rounding errors, let's write this as $$\tag{2} \frac{1-\frac{a}{K}}{1-\frac{b}{K}}= c $$ You can multiply both sides by $1-\frac{b}{K}$ to get $$\tag{3} 1-\frac{a}{K} = c\left(1-\frac{b}{K} \right) = c - \frac{bc}{K} $$ Let's gather terms with $K$ to the left-hand side and the terms without $K$ to the right-hand side: $$\tag{4} \frac{bc}{K} - \frac{a}{K} = c-1 $$ $$\tag{5} \frac{bc-a}{K} = c-1 $$ Continuing, you can multiply both sides by $K$ to get $$\tag{6} bc-a = (c-1)K $$ and now divide by $c-1$ to get the answer: $$\tag{7} K = \frac{bc-a}{c-1} $$ Is it clear now?
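Plugging in the numbers from $(1)$ (a quick sketch, my addition):

```python
a, b = 26.273, 27.165
c = 0.03274448 / 0.03253040       # about 1.0065809

K = (b * c - a) / (c - 1)
print(K)                          # roughly 162.7
print((1 - a / K) / (1 - b / K))  # reproduces c, as a consistency check
```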
Limit of sequence with relation $x_{n+1}=x_n-x_n^2$
We have $$\frac1{x_{n+1}}=\frac1{x_n(1-x_n)}=\frac1{x_n}+1+x_n+x_n^2+\cdots$$ (a geometric series). Thus $$\frac1{x_{n+1}}>\frac1{x_n}+1$$ and so $$\frac1{x_n}\ge \frac1{x_1}+n-1.$$ Therefore $x_n=O(1/n)$. Then $$\frac1{x_{n+1}}=\frac1{x_n}+1+O(1/n)$$ and so $$\frac1{x_n}=n+O(\ln n).$$ That's enough.
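A quick numeric check of the conclusion (my addition, assuming a start such as $x_1=\tfrac12$) shows $\frac1{x_n}-n$ growing like $\ln n$:

```python
import math

x = 0.5   # x_1 (assumed; any start in (0, 1) behaves the same way)
for n in range(1, 100001):
    if n in (10, 100, 1000, 10000, 100000):
        print(n, 1 / x - n, math.log(n))   # the two grow at the same logarithmic rate
    x = x - x * x   # x_{n+1} = x_n - x_n^2
```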
Is it possible for a polynomial to be integer-valued only for prime inputs?
No. Suppose there is such a polynomial $f(x)$ of degree $k$, say. First off, this polynomial has to have rational coefficients (by Lagrange interpolation, say). Let $N$ be the least common denominator of the coefficients. Then $g(x)=Nf(x)$ is a polynomial with integer coefficients and $g(2)$ is divisible by $N$. But $g(2-N)\equiv g(2)\equiv 0\pmod N$, so $g(2-N)$ is divisible by $N$, hence $f(2-N)$ is an integer. But clearly $2-N$ is not a prime, so we get a contradiction. Edit: for the sake of clarity, here is how I have interpreted the question: is there a polynomial $f$ such that, for $n\in\mathbb Z$, $f(n)$ is an integer iff $n$ is prime? I show that there is no such polynomial. Edit 2: (in reply to Yves Daoust) Suppose that $f$ is a polynomial of degree $k$ which takes an integer value at every prime. Pick any primes $p_1,\dots,p_{k+1}$. Consider the polynomial $L(x)$ defined here for $x_i=p_i,y_i=f(p_i)$. Since every $\ell_i$ is a polynomial with rational coefficients of degree at most $k$, the same holds for $L$. Now note $f-L$ is a polynomial of degree at most $k$ which is zero at at least $k+1$ points $p_1,\dots,p_{k+1}$, hence it must be the zero polynomial. Thus $f=L$ has rational coefficients.
Show that there exists a matrix with trace in the set $\{1,2,...,n\}.$
Let $A\in U$. Since $U$ is finite, there exist $a,b\in\mathbb N$ with $a<b$ such that $A^a=A^b$. So the minimal polynomial of $A$ divides $x^b-x^a$. This shows that any eigenvalue of $A$ is either $0$ or a root of unity. Now consider the matrices $A^k$ for $k\in\mathbb N$. The eigenvalues of $A^k$ are the $k$-th powers of the eigenvalues of $A$ (spectral mapping), so taking $k$ to be a common multiple of the orders of the root-of-unity eigenvalues, the eigenvalues of $A^k$ are all either $0$ or $1$. Then the trace of $A^k$ is a sum of $n$ numbers in $\{0,1\}$, so it is an integer between $0$ and $n$. Hope this helps.
Is the induced homomorphism $i_*: \pi_1(A,a) \to \pi_1(X,a)$ the inclusion map if $i: A \to X$ is the inclusion?
Given a continuous map $f \colon X \to Y$ between (pointed) topological spaces, the induced map $f_* \colon \pi_1(X) \to \pi_1(Y)$ is always given by $f_*([\alpha]) = [f \circ \alpha]$. Here $\alpha \colon S^1 \to X$. So, also in the situation in the question, $i_*([\alpha]) = [i \circ \alpha]$. Since $i$ is just an inclusion, it is quite reasonable and normal to identify $i \circ \alpha \colon S^1 \to X$ with $\alpha \colon S^1 \to A$ and to say that $i_*([\alpha]) = [\alpha]$. Do note, though, that $[\alpha]$ on the left is different from $[\alpha]$ on the right, as $[\alpha]$ on the right is really $[i \circ \alpha]$. I'm guessing that the use of $[\alpha]$ for two distinct things is what prompts the question in the title. For an arbitrary inclusion $A \subseteq X$, the map $i_*$ is not necessarily injective. Just consider the case where $A = S^1$ and $X = B^2$. Then $\pi_1(A) = {\mathbb Z}$ and $\pi_1(X) = 0$ and $i_*$ is just the $0$-map. But, if $A$ is a retract of $X$, then $i_*$ is injective, but you need an argument that actually uses that assumption. That argument is easy. The fact that $A$ is a retract of $X$ means that there is a continuous map $r \colon X \to A$ such that $r \circ i = \text{id}_A$. Then also $r_* \circ i_* = \text{id}_{\pi_1(A)}$ and hence $i_*$ is injective.
Differentiability of a function of two variables
The partial derivatives at the origin are equal to $0$. However, $$\lim_{(h,k) \to (0,0)} \frac{f(0+h,0+k) - f(0,0) - 0h - 0k}{\|(h,k)\|_2} $$ is not zero (in fact, it does not exist). So no, it's not differentiable.
convergence of the p-adic log in $pZ_p$
Over the $p$-adics, because of the ultrametric property, to show that a series $\sum_n a_n$ converges all that's required is to prove that $a_n\to0$. It is I hope clear that $v_p(n)\le\log_pn$ (this is the usual real logarithm to base $p$ not a $p$-adic logarithm). Thus $$v_p(x^n/n)\ge n-\log_pn\to\infty$$ for $x\in p\Bbb Z_p$. That is, $x^n/n\to0$ $p$-adically. This then implies that the series for the $p$-adic logarithm converges.
Is it fair to say that ZFC axioms can not even be stated in FOL?
No, that's not true at all. Let's start with the most basic of meta-theories: $\sf PRA$, the theory of Primitive Recursive Arithmetic. Here we have the natural numbers, and the primitive recursive functions. This theory is strong enough to internalise FOL, so we can just assume that we're manipulating strings. The point of an axiom schema is that it is a predicate that lets us recognise all the axioms which have a certain form. And any reasonable schema, including those of $\sf ZFC$, is in fact primitive recursive. In other words, there is a primitive recursive function $f_{\rm Sep}(n)$ which takes in $n$, and if $n$ is the Gödel number of a suitable formula $\varphi$, then $f_{\rm Sep}(n)$ is the Gödel number of the axiom obtained from the schema by putting $\varphi$ into it. If $n$ is not the Gödel number of a suitable formula, just return the axiom for the formula $\varphi$ given by $x=x$ or something like that. Since $\sf ZFC$ is presented as finitely many axioms and one or two schemata (Separation and Replacement, though Replacement is generally enough to prove Separation, making Separation redundant), the collection of Gödel numbers of axioms of $\sf ZFC$ is in fact primitive recursive. So we can really talk about the first-order theory that is $\sf ZFC$. To recap, the point of the schemata is to allow us to have infinitely many axioms with the same structure, one that we can recognise mechanically, so that when we need to use such an axiom in a proof, we can always check mechanically whether it really is one of the axioms. Another way to approach this is by saying that our foundation is in fact $\sf ZFC$. We use set theory to discuss set theory. This sounds circular, but how is this any different from using $\sf PRA$ to study the logical consequences of $\sf PRA$? It's not. Mathematics is not something that we do in a vacuum; some assumptions are needed. And it is perfectly fine to study $\sf ZFC$ inside $\sf ZFC$. There, we have the notion of sets already existing, so we can talk about a set of axioms. Of course, we need to argue why a certain set exists, which is to say that we need to be able to prove that the set of these axioms actually exists. And again we use the fact that a schema is a function which takes formulas and returns axioms, so we may replace the schema with the axioms, as its range. So again, we have that $\sf ZFC$ is a set of first-order axioms in the language of set theory.
Number of set, say $A$, of subset such that $\sum |A_i|=|\xi|$ and that $A_{ij}=A_{\ell k}\iff \ell=i\land k=j$
$$\binom{n}{k}B(k)$$ where $B(k)$ is the $k$-th Bell number. That is, we count the number of ways to choose the $k$ items involved, times the number of ways to partition a set of $k$ items.
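A small R sketch of this count (the Bell numbers are computed via the standard recurrence $B(n+1)=\sum_{k=0}^{n}\binom{n}{k}B(k)$; the function name is mine):

    bell <- function(m) {
      B <- numeric(m + 1)
      B[1] <- 1                             # B(0) = 1
      if (m > 0) for (n in 1:m) B[n + 1] <- sum(choose(n - 1, 0:(n - 1)) * B[1:n])
      B[m + 1]
    }
    n <- 5; k <- 3
    choose(n, k) * bell(k)                  # 10 * 5 = 50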
Prove Base case $P(0)$ with three variables.
Answering my own question: I've approached the question wrong. I'm not supposed to have $i+1$ until after the base case of $0$. Base case $P(0)$: $a^0\equiv b^0 \pmod m$, which holds because $a^0-b^0=1-1=0$, and $m$ divides $0$ as long as $m$ is not zero.
Evaluating Limit of an Integral (Real-Analysis)
Hint: $$\biggl|\,\int_0^1{\sin(n/x)\over n\sqrt x}\,dx\,\biggr|\le {1\over n}\int_0^1{\bigl|\sin(n/x)\bigr|\over \sqrt x}\,dx\le {1\over n}\int_0^1{1\over \sqrt x}\,dx={2\over n}.$$
Using metric to raise and lower indices
The terms "contravariant" and "covariant" refer to how vectors change when you move from one coordinate system to another. So just declaring a vector like $V$ or $[2,2]$ does not make it contra- or covariant. It's what you do with it that matters. For example, let's say there is a coordinate system representing the width and length of furniture in meters. A vector $[2,2]$ in this coordinate system could mean two different things: Firstly, it could be the dimensions of a piece of furniture: 2 meters by 2 meters. That would be a contravariant vector. The reason is that if you transform into a feet coordinate system, the numbers $[2,2]$ would get bigger, $[6.56,6.56]$, while the measurements (aka basis vectors) get smaller (feet are smaller than meters). Since the numbers vary contrary to the bases, it's a contravariant vector. But $[2,2]$ could also represent a function for computing the perimeter of the furniture. If you had a desk that was 1.5m x 3m, you can find the perimeter by multiplying by $[2,2]$: the perimeter is $[2,2] \cdot [1.5,3] = 2\cdot 1.5 + 2\cdot 3 = 9$. To transform that function into feet (so you get the same answer), the vector $[2,2]$ would have to get smaller, $[0.61,0.61]$. Since the numbers co-vary with the bases, it's a covariant vector. So you have to know what the vector does and how it transforms in order to determine whether it's contravariant or covariant. As for the summation: $\frac{\partial x^d}{\partial x'^b}$ is a $4\times 4$ matrix. Multiplying any matrix by its inverse results in the identity matrix, which is the same as the Kronecker delta $\delta$. Each entry in the result will be a sum with 4 parts, but they'll cancel out to leave either 1 or 0.
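A minimal R sketch of the meters-to-feet example above (the conversion factor 1 m $\approx$ 3.28084 ft is the only ingredient): contravariant components scale by the factor, covariant components by its inverse, and the pairing (the perimeter) is unchanged.

    m_to_ft <- 3.28084
    dims_m  <- c(1.5, 3)            # desk dimensions in meters (contravariant)
    perim_m <- c(2, 2)              # perimeter functional (covariant)
    sum(perim_m * dims_m)           # 9
    dims_ft  <- dims_m * m_to_ft    # contravariant: numbers grow
    perim_ft <- perim_m / m_to_ft   # covariant: numbers shrink
    sum(perim_ft * dims_ft)         # still 9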
Convex optimization books, except with a focus on problem solving and formalism later?
Given your background in statistics perhaps you could look at Online Convex Optimisation, which is an area that combines convex optimization and regret minimisation in sequential learning problems. The resources in this area usually cover a lot of convex optimisation theory that is useful for solving actual problems in that area. Online learning is an area with a lot of open problems and has been an active area for research, but also covers a lot of practical problems you'd face in an industry setting, e.g. portfolio optimisation, recommender systems. Some links: Online Convex Optimization Tutorial and list of resources Elad Hazan's book on online convex optimization Online Learning notes by Francesco Orabona
Are all of the elements of a linearly independent set always linearly independent in the super set?
Yes. Linear independence in $U$ for, say, the set $u,v,w$ means $au+bv+cw =0_U$ exactly when $a=b=c=0$ (that's the zero scalar). Linear independence in $V$ for the same set $u,v,w$ means $au+bv+cw =0_V$ exactly when $a=b=c=0$. The two are equivalent because $U$ and $V$ always have the same zero vector if one contains the other. In other words, $0_U=0_V$.
If $P$ is an invertible linear operator on a finite dimensional vector space over a finite field, then there is some $n>0$ such that $P^n=I$
I'll make a try: Take $P,P^2,P^3,\ldots$. These can't all be different, because there are only finitely many matrices of a given size with entries in a finite field. So $P^t=P^s$ for some $t>s$, and multiplying by $(P^{-1})^s$ gives $P^{t-s}=I$, since $P$ is invertible. Take $n=t-s>0$.
quotient top., continuous and bijective map
It basically follows from the fact that $e: \Bbb R \to \Bbb C\simeq \Bbb R^2; e(x)= \exp(2\pi ix)$ obeys $e(x)=e(y)$ iff $x - y = k$ for some $k \in \Bbb Z$, e.g. from properties of the sine/cosine, etc. Then we can conclude that $f(x,y)=f(x',y')$ iff $e(x) = e(x')$ and $e(y)=e(y')$ iff $x -x' \in \Bbb Z$ and $y-y' \in \Bbb Z$, and the latter can only happen (as $x,y \in [0,1]$) when $x,x' \in \{0,1\}$ and $y,y' \in \{0,1\}$, or $(x,y) \sim (x',y')$. The implication $(x,y) \sim (x',y') \to f(x,y)=f(x',y')$ (the same class implies the same value), for $f$ considered to be defined on $A=[0,1]^2$, gives us that $f$ is well-defined on $B=A{/}\sim$, because the choice of representative of a class does not affect the function value on that class. The universal property of quotients then gives that $f$ as defined on $B$ is continuous iff $f$ defined on $A$ is, which it is, as all component functions are (even) differentiable. The implication $f(x,y)=f(x',y') \to (x,y) \sim (x',y')$ gives us the injectivity when defined on $B$: the same value implies the same class. Compactness of $A$ and thus $B$, plus Hausdorffness of $S^1 \times S^1$, implies that we have a homeomorphism between $B$ and $S^1 \times S^1$.
Integration using Monte Carlo Method
Recall that if $Y$ is a random variable with density $g_Y$ and $h$ is a bounded measurable function, then $$\mathbb E[h(Y)] = \int_{\mathbb R} h(y)g_Y(y)\,\mathsf dy. $$ Moreover, if $U\sim\mathcal U(0,1)$, then $a+(b-a)U\sim\mathcal U(a,b)$. So applying the change of variables $x=a+(b-a)u$ (with $a=0$, $b=\pi$) to the given integral, we have $$I = \int_0^1 \frac{\pi}{\sqrt{2\pi}} e^{-\frac12\sin^2 (\pi u) }\,\mathsf du=\int_0^1 h(u)\,\mathsf du, $$ with $h(u)=\sqrt{\frac\pi 2} e^{-\frac12\sin^2 (\pi u) }$. It follows then that $I=\mathbb E[h(U)]$ with $U\sim\mathcal U(0,1)$. Let $U_i$ be i.i.d. $\mathcal U(0,1)$ random variables and set $X_i=h(U_i)$; then for each positive integer $n$ we have the point estimate $$\newcommand{\overbar}[1]{\mkern 1.75mu\overline{\mkern-1.75mu#1\mkern-1.75mu}\mkern 1.75mu} \widehat{I_n} := \overbar X_n= \frac1n \sum_{i=1}^n X_i$$ and the approximate $1-\alpha$ confidence interval $$\overbar X_n\pm t_{n-1,\alpha/2}\frac{S_n}{\sqrt n}, $$ where $$S_n = \sqrt{\frac1{n-1}\sum_{i=1}^n \left(X_i-\overbar X_n\right)^2} $$ is the sample standard deviation. Here is some $\texttt R$ code to estimate the integral using the Monte Carlo method:

    # Define "h" function
    hh <- function(u) {
      return(sqrt(0.5*pi) * exp(-0.5 * sin(pi*u)^2))
    }

    n     <- 1000       # Number of trials
    alpha <- 0.05       # Significance level
    U     <- runif(n)   # Generate U(0,1) variates
    X     <- hh(U)      # Compute X_i's
    Xbar  <- mean(X)    # Compute sample mean
    Sn    <- sqrt(1/(n-1) * sum((X-Xbar)^2))                        # Sample stdev
    CI    <- Xbar + c(-1,1) * qt(1-(0.5*alpha), n-1) * Sn/sqrt(n)   # CI bounds

    # Print results
    cat(sprintf("Point estimate: %f\n", Xbar))
    cat(sprintf("Confidence interval: (%f, %f)\n", CI[1], CI[2]))

For reference, the value of the integral (as computed by Mathematica) is $$e^{-\frac14}\sqrt{\frac{\pi }{2}}\, I_0\left(\frac{1}{4}\right) \approx 0.991393, $$ where $I_\cdot(\cdot)$ denotes the modified Bessel function of the first kind, i.e. $$I_0\left(\frac14\right) = \frac1\pi\int_0^\pi e^{\frac14\cos\theta}\,\mathsf d\theta. $$
What is a 1-graphic matroid?
This is a slightly unusual way to formulate the ATSP as the intersection of three matroids. Let me first give you the usual way, which may help clarify things. (A source for this is Chapter 8 of Combinatorial Optimization: Networks and Matroids by Eugene Lawler.) First suppose we are looking for an open tour that starts at node $1$, ends at node $n$, and visits all other nodes. We assume there are no edges into node $1$ or out of node $n$. Such tours are exactly the maximal ($(n-1)$-edge) elements of the intersection of the following three matroids:

(i) The partition matroid whose independent sets are all the edge sets with at most $1$ edge into every node. (It's a partition matroid because we partition the edge set according to the target vertex of an edge, and the independent sets pick at most one edge from every part of the partition.)

(ii) The partition matroid whose independent sets are all the edge sets with at most $1$ edge out of every node.

(iii) The graphic matroid of the underlying undirected graph. This is a standard definition: the independent sets of this matroid are all the forests in the graph (so the maximal independent sets are the spanning trees).

If we want a closed tour, we can reduce it to the version above as follows. Split node $1$ of an $n$-node graph into nodes $1'$ and $n+1$, where node $1'$ keeps all the outgoing edges of node $1$, and node $n+1$ keeps all the incoming edges. Then, find open tours from $1'$ to $n+1$. Of course, there is a bijection between the edges of the $(n+1)$-node graph we found and the $n$-node graph we started with, so there is also a correspondence between edge sets in the $(n+1)$-node graph and the $n$-node graph. So we could define three matroids for a closed tour directly: the definitions of both partition matroids (i) and (ii) remain the same, while the matroid corresponding to the graphic matroid (iii) now has the following independent sets: subgraphs which are either acyclic or contain a unique cycle through node $1$. I assume that your slightly nonstandard definition has, as its matroid in (iii), all subgraphs which are either acyclic or contain any one cycle. (We are still looking at the undirected graph here.) These subgraphs are of course not all forests, but you can see how the confusion arises, because they are inspired by a situation where they were all forests.
Prime ideals of height less than the dimension
If the union of primes equals $\mathfrak{m}$, then $\mathfrak{m}$ is contained in their union, and by the prime avoidance lemma it must be contained in one of these primes, a contradiction.
Given ~(A->B) how would I reach the conclusion of A&~B
To reach $A\&\lnot B$, prove $A$ and $\lnot B$ separately, then use conjunction introduction. To prove them, use a reduction to absurdity and an indirect proof, respectively. Both require deriving a contradiction from an assumption, and the only other thing to contradict is the premise (which is the negation of a conditional), so derive that conditional... The exact rules and format you use will depend on your proof system, but the skeleton will basically be: $$\def\fitch#1#2{~~~~\begin{array}{|l}#1\\\hline#2\end{array}}\fitch{\lnot (A\to B)}{\fitch{\lnot A}{\fitch{A}{~\vdots\\B}\\A\to B\\\bot}\\\lnot\lnot A\\A\\\fitch{B}{\fitch{A}{~\vdots\\B}\\A\to B\\\bot}\\\lnot B\\A\,\&\,\lnot B}$$
Limit and L'Hopitals
If you know that $\lim_{x \to \infty} (1+\frac{1}{x})^x = e$, then $$\lim_{n \to \infty}\left(1+\frac{3}{n}\right)^n=\lim_{n \to \infty} \left(1+\frac{1}{n/3}\right)^{n/3 \cdot 3} = e^3.$$ If you don't know that, then try the following. \begin{align*} \lim_{n \to \infty} \ln \left(1+\frac{3}{n}\right)^n &= \lim_{n \to \infty} n \ln\left(1+\frac{3}{n}\right)\\ &= \lim_{n \to \infty} \frac{\ln\left(1+\frac{3}{n}\right)}{1/n} & \text{"$\infty/\infty$"}\\ &= \lim_{n \to \infty} \frac{\frac{-3/n^2}{1+\frac{3}{n}}}{-1/n^2} & \text{l'Hôpital}\\ &= \lim_{n \to \infty} \frac{3}{1+\frac{3}{n}}\\ &= 3. \end{align*} Then, by continuity, $$\lim_{n \to \infty} \left(1+\frac{3}{n}\right)^n=\lim_{n \to \infty}e^{\ln \left(1+\frac{3}{n}\right)^n}=e^{\lim_{n \to \infty} \ln \left(1+\frac{3}{n}\right)^n}=e^3.$$
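A quick numerical sanity check in R (a sketch): for large $n$ the expression is already close to $e^3\approx 20.0855$.

    n <- 10^6
    (1 + 3/n)^n   # about 20.0854
    exp(3)        # 20.08554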
Cardinality of the union of all repeated Cartesian products of N with itself
This set is countable. Let $\mathbb{N}^k$ denote the $k$ fold cartesian product of $\mathbb{N}$. By induction, $\mathbb{N}^k$ is countable. Hence $$\bigcup_{k \in \omega} \mathbb{N}^k$$ is countable as it is a countable union of countable sets.
Common Conjugate Diameters of Conics
These equations are invariant under the transformation $x \to -x, y \to -y$. As can be seen in the figure below, they represent ellipses centered at the origin. Thus, we can assume that at least one of the common conjugate diameters has equation $y=ax$ (see remark below). Consequently, lines parallel to this diameter have equations: \begin{equation} y=ax+b \end{equation} In order to find the abscissas of the intersections of these parallels with each ellipse, we plug the above expression of $y$ into their equations, giving resp. \begin{equation}x^2+4x(ax+b)+6(ax+b)^2=1 \ \iff (6a^2+4a+1)x^2+4b(3a+1)x+(6b^2-1)=0\end{equation} \begin{equation}2x^2+2x(ax+b)+3(ax+b)^2=\frac13 \ \iff \ (3a^2+2a+2)x^2+2b(3a+1)x+(3b^2-\frac13)=0\end{equation} Taking the half sum of the roots in each equation (classical formula: the sum of the roots of a quadratic equation $Ax^2+Bx+C=0$ is $-B/A$), we obtain the midpoints' abscissas. As these abscissas must coincide for both ellipses, we must have: \begin{equation}\dfrac{2(3a+1)b}{6a^2+4a+1}=\dfrac{(3a+1)b}{3a^2+2a+2}\end{equation} (for all $b$ within a certain range), giving the evident solution \begin{equation}a=-1/3\end{equation} The first diameter thus has equation $y=ax=-x/3$. The set of midpoints is constituted by the points \begin{equation} x=\dfrac{2(3a+1)b}{6a^2+4a+1} , \ \ y=ax+b \end{equation} which is plainly (don't forget that $a=-1/3$) the set of points $(x=0,y=b)$, i.e. the vertical axis (see figure). Remark: the form $y=ax$ excludes vertical lines. But there is no loss of generality in our solution: if there is a vertical diameter (we have seen that this can happen...), we switch to its conjugate diameter, which cannot also be vertical. Important remark: If you are acquainted with linear algebra, this issue (finding conjugate diameters) is identical to the classical question of simultaneous diagonalization of quadratic forms. In both cases, you have to ''skew'' axes (the ''skewing'' being achieved by a matrix $S$) in order that one of the conics looks, with respect to these new axes (and with a certain imagination...), like a circle and the other like an ellipse with the new axes as its proper axes. This can be seen in the following way. Let $A$ and $B$ be the matrices associated to the LHS of the equations of these conic sections, and $S$ the linear transform associated to the ''skewing'' effect, i.e., a change of axes (the former $x$ axis being transformed into the axis directed by vector $\binom{\ 3}{-1}$; ordinate axis unchanged), with columns scaled so that $S^TAS=I$: \begin{equation}A=\begin{pmatrix}1 & 2\\ 2 & 6\end{pmatrix}, \ \ \ \ \ \ B=\begin{pmatrix}6 & 3\\ 3 & 9\end{pmatrix}, \ \ \ \ \ \ S=\begin{pmatrix}\ \ \sqrt3 & 0\\ -\tfrac1{\sqrt3} & \tfrac1{\sqrt6}\end{pmatrix}. \end{equation} Then $A$ and $B$ become diagonalized, with respect to this new basis, under the form: \begin{equation} A_1=S^TAS=\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}=I, \ \ \ \ \ \ \ \ \ B_1=S^TBS=\begin{pmatrix}15 & 0\\ 0 & \tfrac32\end{pmatrix}. \end{equation} The square roots of the coefficients found in the diagonal of $B_1$: $\sqrt{15}$ and $\sqrt{\tfrac32}$, are the enlargement/shrinking factors to be used when one must pass from one ellipse to the other. Here we have only checked that the issue is the same (we had found previously the coefficients of $S$). But one can build an efficient method to obtain directly the coefficients of $S$ with matrix computations only. See for example the first solution to this question: A property of positive definite matrices
How to understand "a.s. convergence does not come from a metric, or even from a topology"?
Let $\{X_n\}$ be a sequence of random variables which converges in probability but not a.s. Take any subsequence of this sequence. The subsequence also converges in probability, which implies that there is a further subsequence converging a.s. Now suppose there were a topology on the space of all random variables on the given probability space in which convergence of a sequence is equivalent to a.s. convergence. By Theorem 2.3.3, since every subsequence of $\{X_n\}$ has a further subsequence that converges a.s., the original sequence $\{X_n\}$ would itself have to converge a.s., which is not true.
Solve the Recurrence : $T(n)=3T(n/3) +\frac{n}{\log(n)}$.
Using master theorem case 2b, as $\frac{n}{\log(n)} = \Theta(n\log^{-1}(n))$ ($c_{crit} = 1$ and $k = -1$), we have $T(n) = \Theta(n\log\log(n))$. To know more, you may find this paper useful.
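As a numerical sanity check in R (a sketch; the base case $T(1)=1$ is my assumption), one can unroll the recurrence at powers of $3$ and watch $T(n)/(n\log\log n)$:

    Tn <- 1                                  # T(1) = 1 (assumed base case)
    for (j in 1:40) {
      n  <- 3^j
      Tn <- 3 * Tn + n / log(n)              # T(n) = 3 T(n/3) + n / log(n)
      if (j %% 10 == 0) cat(j, Tn / (n * log(log(n))), "\n")
    }
    # the ratio changes only slowly (logarithmically),
    # consistent with T(n) = Theta(n log log n)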
Can we say anything regarding the derivative at zero of a holomorphic map from the unit disk to itself without a condition of Schwarz's lemma?
The Schwarz–Pick theorem is a generalization of the Schwarz Lemma. If $f$ is a holomorphic function from the unit disk into itself then $$ \frac{|f'(z)|}{1-|f(z)|^{2}} \leq \frac{1}{1-|z|^{2}} \, . $$ In particular, $$ |f'(0)| \le 1-|f(0)|^{2} \le 1 \, . $$ Equality $|f'(0)|=1$ can only hold if $f(0) = 0$, and consequently $f(z)=az$ for a constant $a$ with $|a|=1$.
Complex Analysis analytic function
The answer is yes. Setting $f = u + iv$, look at the Cauchy-Riemann equations for $f$: $u_x = v_y, \tag{1}$ $u_y = -v_x. \tag{2}$ Since $\text{Im} f = v = 71$, $v_x = v_y = 0$; thus by (1), (2) $u_x = u_y = 0$ as well. But this just says $\nabla u = 0$, so $u$ must be a real constant, call it $C \in \Bbb R$. Thus $f = C + 71i$. Hope this helps. Cheerio, and as always, Fiat Lux!!!
Computing $\lim\limits_{n\to\infty}\int_X ~{\left\{\cos\left(\pi f(x)\right) \right\}}^{2n}~\text{d}\mu(x)$
Put $f_n(x)=\left\{\cos (\pi f(x))\right\}^{2n}$. Then the sequence $\{f_n\}$ is decreasing, and $0\leq f_1\leq 1$, which is integrable. Hence we can apply the reversed version of the monotone convergence theorem, which gives $$\lim_{n\to\infty}\int_X f_n(x)d\mu(x)=\int_X \lim_{n\to\infty}f_n(x)d\mu(x).$$ Since $\left\{\cos(\pi f(x))\right\}^2=1$ if $f(x)\in\mathbb Z$, and $\left\{\cos(\pi f(x))\right\}^2<1$ otherwise, we have $$\lim_{n\to\infty}f_n(x)=\begin{cases}1&\mbox{if }f(x)\in\mathbb Z\\ 0&\mbox{otherwise},\end{cases}$$ so $$\lim_{n\to +\infty}\int_X\left\{\cos (\pi f(x))\right\}^{2n}d\mu(x)=\int_X\mathbf 1_{\left\{f(x)\in\mathbb Z\right\}}d\mu(x)=\sum_{k\in\mathbb Z}\mu(\left\{x\in X,f(x)=k\right\}).$$
Which functions make it true? $(f(x)+g(x))^{-1} = f^{-1}(x) + g^{-1}(x)$
If you don't mind complex solutions, you can do it: for any $b \ne 0$ take $$ f(x) = b x,\ g(x) = b \omega x$$ Thus $$ f^{-1}(x) = x/b, \ g^{-1}(x) = x/(b \omega)$$ and $$ f(x) + g(x) = b (1+\omega) x, \ (f+g)^{-1}(x) = x/(b (1+\omega))$$ So $(f+g)^{-1} = f^{-1} + g^{-1}$ if $$ \frac{1}{1+\omega} = 1 + \frac{1}{\omega} $$ which is true if $1 + \omega + \omega^2 = 0$. The roots of this are $\omega = \frac{-1 \pm \sqrt{3} i}{2}$, the primitive cube roots of unity.
Show that if $f$ is a uniformly continuous function on $\mathbb{R}$ and $f\in L^1(\mathbb{R})$, then $f$ is bounded and $\lim_{|x|\to\infty}f(x)=0$.
It is sufficient to consider positive $x$ and also to assume that $f$ is positive (since only $|f| $ is relevant for the $L^1$ norm). Assume $f$ does not converge to $0$. Then there is $\varepsilon > 0$ and a sequence $x_n\rightarrow \infty$ such that $f(x_n) \ge \varepsilon$. Without loss of generality $x_{n+1}> x_n+2$. Since $f$ is uniformly continuous, there is $\delta > 0$ such that $|x-y|<\delta \Rightarrow |f(x)-f(y)| < \frac{\varepsilon}{2}$. Wlog $\delta < 1$. In particular, $f\ge \varepsilon/2$ in a $\delta$-neighbourhood of each $x_n$. By construction, the intervals $(x_n- \delta, x_n+\delta)$ are pairwise disjoint. So $$\infty >|f|_{L^1} = \int f \ge \sum_1^\infty\int_{(x_n- \delta, x_n+\delta)} f \ge \sum_1^\infty \varepsilon\delta $$ which is a contradiction. So $\lim_{x\rightarrow\infty} f(x) = 0$ (similarly for negative $x$). Since, as a continuous function, $f$ is bounded on each compact interval, $f$ is bounded on all of $\mathbb{R}$.
If p, q, r are the roots of $x^3-6x^2+3x+1=0$ determine the possible values of $p^2q+q^2r+pr^2$.
Hint: Let \begin{eqnarray*} A=p^2q+q^2r+r^2p \\ B=pq^2+qr^2+rp^2. \end{eqnarray*} Now calculate $A+B$ and $AB$ ... and solve the quadratic.
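One way to carry out the hint (a sketch; the symmetric-function identities below are standard and can be verified by expansion): with $e_1=p+q+r=6$, $e_2=pq+qr+rp=3$, $e_3=pqr=-1$, \begin{eqnarray*} A+B &=& e_1e_2-3e_3 \;=\; 6\cdot 3-3(-1) \;=\; 21, \\ AB &=& e_1^3e_3+e_2^3+9e_3^2-6e_1e_2e_3 \;=\; -216+27+9+108 \;=\; -72, \end{eqnarray*} so $A$ and $B$ are the roots of $t^2-21t-72=0$, i.e. $A,B\in\{24,-3\}$.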
An identity involving binomial coefficients and rational functions
This identity has appeared on MSE on several occasions in various forms. There is a proof using residues which goes like this (quoted from what should be earlier posts). Start with the function $$f(z) = n! (-1)^n \frac{p}{z+p} \prod_{q=0}^n \frac{1}{z-q}.$$ We then get $$\mathrm{Res}_{z=k} f(z) = n! (-1)^n \frac{p}{k+p} \prod_{q=0}^{k-1} \frac{1}{k-q} \prod_{q=k+1}^n \frac{1}{k-q} \\ = n! (-1)^n \frac{p}{k+p} \frac{1}{k!} \frac{(-1)^{n-k}}{(n-k)!} = (-1)^k \frac{p}{k+p} {n\choose k}.$$ Residues sum to zero and hence we have $$\sum_{k=0}^n (-1)^k \frac{p}{k+p} {n\choose k} = - \mathrm{Res}_{z=\infty} f(z) - \mathrm{Res}_{z=-p} f(z).$$ Observe that $\lim_{R\to\infty} 2\pi R \times \frac{1}{R} \times \frac{1}{R^{n+1}} = 0$ (the circumference of the contour $|z|=R$ times the order of $|f|$ on it), so the residue at infinity is zero. It remains to compute $$- \mathrm{Res}_{z=-p} f(z) = - n! (-1)^n \times p \times \prod_{q=0}^n \frac{1}{-p-q} \\ = n! \times p \times \prod_{q=0}^n \frac{1}{p+q} = n! \times p \times \frac{(p-1)!}{(p+n)!} = {n+p\choose p}^{-1}.$$ This concludes the argument. Here we require that $-p$ not be among the poles in $[0,n]$, which would result in a singularity in the sum.
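A quick numerical spot-check of the identity in R (a sketch, for one choice of $n$ and $p$):

    n <- 6; p <- 4; k <- 0:n
    sum((-1)^k * p / (k + p) * choose(n, k))   # left-hand side
    1 / choose(n + p, p)                       # right-hand side; both are 1/210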
showing inequality is strict in integral inequality
I assume you mean you want $\left| \int_{\mathbb R} F(x) e^{-ix}\; dx\right| < 1$. Take $\theta$ so that $\left| \int_{\mathbb R} F(x) e^{-ix}\; dx\right| = \text{Re} \;e^{i\theta} \int_{\mathbb R} F(x) e^{-ix}\; dx$. Now $\text{Re} \; e^{i\theta} \int_{\mathbb R} F(x) e^{-ix}\; dx = \int_{\mathbb R} F(x) \cos(x-\theta)\; dx$. But if that is $1$, then $\int_{\mathbb R} F(x) (1 - \cos(x-\theta))\; dx = 0$, which implies that $F(x) = 0$ almost everywhere and thus that $\int_{\mathbb R} F(x)\; dx = 0$, contradiction.
AAA similarity Theorem.
Dilate one of the triangles until one of its sides is the same length as the corresponding side of the other triangle. Dilation preserves angle measures, so they still have all their angles equal. It follows from the ASA postulate that the triangles are now congruent (and hence that the original triangles were similar).
Compute $\lim_{x\to\infty} x \lfloor \frac{1}{x} \rfloor$
You make it sound like the reason that $x\lfloor\frac{1}{x}\rfloor\leq\lfloor\frac{1}{x}\rfloor$ is true for positive $x$ is because $xy\leq y$ for positive $x$ and $y$, but this is not so. Instead, the reason it is true is because if $x\leq 1$ then $xy\leq y$ for positive $y$ and if $x>1$ then $\lfloor\frac{1}{x}\rfloor=0$ so that $x\lfloor\frac{1}{x}\rfloor=0$ too. It is far simpler just to note that if $x>1$ then $\lfloor\frac{1}{x}\rfloor=0$ and hence $x\lfloor\frac{1}{x}\rfloor=0$, giving us $\lim_{x\to\infty}x\lfloor\frac{1}{x}\rfloor=0$.
Determine the number of integral points on hypotenuse of a right triangle
Contrary to first appearance, that's not a simple problem at all, whether or not the intercepts on the axes (the sides of the triangle) are integral. The problem is to find the solutions to $$ \bbox[lightyellow] { \left\{ \matrix{ 0 \le x,y \in \mathbb Z \hfill \cr {x \over a} + {y \over b} = 1 \hfill \cr} \right. }$$ Clearly, if only $a$ or only $b$ is irrational there are no solutions. But even when they are rational there might be no solutions, e.g. for $a=b=7/2$. Let's examine the various cases in detail.

$a$ and $b$ irrational

As already said, if only one of them is irrational there cannot be any solution. If both are irrational, in general there is also no solution, except in the special case in which we can write $$ \bbox[lightyellow] { \eqalign{ & {{x - a} \over {x_{\,0} - a}} = {x \over {x_{\,0} - a}} - {a \over {x_{\,0} - a}} = {y \over {y_{\,0} }}\quad \Rightarrow \cr & \Rightarrow \quad {x \over a} - {y \over {y_{\,0} {a \over {x_{\,0} - a}}}} = 1\quad \Rightarrow \quad b = y_{\,0} {a \over {x_{\,0} - a}} \quad \left| {\;0 \le x_{\,0} ,y_{\,0} \in \mathbb Z} \right. \cr} } \tag{1}$$ in which case $(x_0,y_0)$ is in fact the only solution.

$a$ and $b$ integer

In this case the number of points is given by $$ \bbox[lightyellow] { N = 1 + \gcd (a,b) } \tag{2}$$ because the step $(\Delta x, \Delta y)$ between two solutions shall be such that $| \Delta y/ \Delta x |=b/a$ and the number of steps $k$ be such that $k|\Delta x|=a$ and $k|\Delta y|=b$.

$a$ and $b$ rational

When $a$ and $b$ are rational, with simple passages we can reduce to $$ \bbox[lightyellow] { \left\{ \matrix{ 0 \le x,y,n,m,q \in \mathbb Z \hfill \cr n\,x + m\,y = q \hfill \cr} \right. }$$ If $x$ and $y$ were allowed to be negative, then the above linear diophantine equation could be solved by the extended Euclidean algorithm, subject to $$ \bbox[lightyellow] { {\rm lcm}(a,b) = q\quad \Leftrightarrow \quad \gcd \left( {n,m} \right) \mid q }$$ Among the set of solutions $\{(x_k,y_k)\}$ arising from the above, you shall then determine which, if any, are the couples with non-negative values. The number of such non-negative solutions is known as the Restricted Partition Function $p_{\{n,m\}}(q)$, that is, the number of partitions of $q$ containing only parts belonging to a given set $S$, in this case $S=\{n,m\}$. This function is a building block in the Representability Problem or Frobenius Coin Problem. The ogf of $p_{\{n,m\}}(q)$ is $$ \bbox[lightyellow] { {1 \over {\left( {1 - z^n } \right)\left( {1 - z^m } \right)}} } \tag{3}$$ and $p_{\{n,m\}}(q)$ can also be expressed, thanks to Popoviciu's theorem, as $$ \bbox[lightyellow] { p_{\{ n,m\} } (q) = {q \over {nm}} - \left\{ {{{n^{( - 1)} q} \over m}} \right\} - \left\{ {{{m^{( - 1)} q} \over n}} \right\} + 1\quad \left| {\;\gcd (n,m) = 1} \right. }\tag{4}$$ where $$ \bbox[lightyellow] { \left\{ \matrix{ \left\{ x \right\} = x - \left\lfloor x \right\rfloor \hfill \cr n^{( - 1)} n \equiv 1\quad \left( {\bmod m} \right) \hfill \cr m^{( - 1)} m \equiv 1\quad \left( {\bmod n} \right) \hfill \cr} \right. }$$
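A small R sketch comparing a brute-force count of the non-negative solutions of $nx+my=q$ with Popoviciu's formula (4) (variable names are mine; $\gcd(n,m)=1$ is assumed):

    n <- 3; m <- 5; q <- 22
    brute <- sum((q - n * (0:(q %/% n))) %% m == 0)   # try every feasible x
    ninv <- which((n * (1:(m - 1))) %% m == 1)        # n^(-1) mod m, by search
    minv <- which((m * (1:(n - 1))) %% n == 1)        # m^(-1) mod n, by search
    frac <- function(x) x - floor(x)                  # fractional part {x}
    popoviciu <- q / (n * m) - frac(ninv * q / m) - frac(minv * q / n) + 1
    c(brute, popoviciu)                               # both equal 1 here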
Type III Von Neumann algebras and spectra of the modular operators.
To try to make things simple (they are not; modular theory and Connes' work on it are very, very far from trivial): if $M$ is semifinite then the modular operator of a trace is the identity. That is, at least when talking about factors, the modular operator is non-trivial only on factors of type III. That said, the goal of the paper is not so much to show that the factors are of type III, but to show that the factors are of type III$_1$ (I'm not entirely sure if the algebras in the paper are factors, but at least in talking about factors I'm more sure I'm saying the right thing). This has to do with A. Connes' classification. Among many, many other things, Connes proved that the set $$ \Gamma(M)=(0,\infty)\cap\,\bigcap\{\operatorname{sp}\Delta_\phi:\ \phi\ \text{ is a faithful normal semifinite weight on }M\} $$ is a closed multiplicative subgroup of $(0,\infty)$. The only possibilities are:

$\Gamma(M)=\{1\}$; if $M$ is also type III, we say that $M$ is of type III$_0$.

$\Gamma(M)=\{\lambda^n:\ n\in\mathbb Z\}$ for some $\lambda\in(0,1)$; we say that $M$ is of type III$_\lambda$.

$\Gamma(M)=(0,\infty)$; we say that $M$ is of type III$_1$.

When $M$ is semifinite, you always have $\Gamma(M)=\{1\}$. So if you can show that the spectrum of $\Delta_\phi$ is $(0,\infty)$, then $M$ is of type III$_1$.
Can it be defined as inner product of two vectors?
$\def\\#1{{\bf#1}}$An inner product should have the property that if $\\x\cdot\\x=0$ then $\\x=\\0$. This is not true for your example (except in the case $n=1$). For example, if $\\x=(1,-1,0,0,\ldots)$, then $$\\x\cdot\\x=(1-1+0+\cdots)(1-1+0+\cdots)=0\ ,$$ but $\\x\ne\\0$.
A line is perpendicular on a plane if and only if it is perpendicular on every line from that plane.
Quite frankly, the claim that this is false seems to be fake news. This definition is accurate. Although, if you are working in non-Euclidean geometry, then I suppose you could alter the definition of perpendicularity to somehow force the claim to be true. Clarification (special thanks to @Aretino): A line is perpendicular to a plane if it has only one point in common with the plane and is perpendicular to every line on the plane passing through that point.
How do I deal with $\cos(n\pi /4)$ in this series?
We can evaluate the series in closed form by using Euler's Formula to write $\cos(\pi k/4)=\text{Re}(e^{i\pi k/4})$. Proceeding, we find that $$\begin{align} \lim_{n\to \infty}\sum_{k=0}^n\frac{\cos(\pi k/4)}{2^k}&=\lim_{n\to \infty}\text{Re}\left(\sum_{k=0}^n\left(\frac{e^{i\pi/4}}{2}\right)^k \right)\\\\ &=\lim_{n\to \infty}\text{Re}\left(\frac{1-\frac1{2^{n+1}}e^{i\pi(n+1)/4}}{1-\frac12e^{i\pi/4}}\right)\\\\ &=\frac{4-\sqrt2}{5-2\sqrt2}\\\\ &=\frac1{17}(16+3\sqrt 2) \end{align}$$
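A quick numerical check in R (a sketch): the partial sums do approach $\frac1{17}(16+3\sqrt 2)\approx 1.190743$.

    k <- 0:60
    sum(cos(pi * k / 4) / 2^k)   # 1.190743
    (16 + 3 * sqrt(2)) / 17      # 1.190743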
Singular support of a distribution is {0}.
If $u \in \mathcal{D}'(\Omega)$ is a distribution, the singular support $\mathrm{singsupp}(u)$ of $u$ is the set of all points $x \in \Omega$ for which there is no open neighborhood $U\subset \Omega$ of $x$ on which $u_{|U}$ is a smooth function: \begin{align*} \displaystyle \mathrm{singsupp}(u) = \lbrace x \in \Omega : \nexists U \subset \Omega : u_{|U} \in \mathcal{E}(U) \rbrace \end{align*} In other words, if $U_{max}$ is the largest open set in $\Omega$ on which the distribution satisfies $u \in \mathcal{E}(U_{\max})$, then \begin{align*} \displaystyle \mathrm{singsupp}(u) = \Omega \setminus U_{max} \end{align*} Note that since $u=0$ on the largest open set $G \subset \Omega$ where it vanishes, we have $\mathrm{singsupp}(u) \subset \mathrm{supp}(u)$. So in general I think you cannot apply the theorem; there is too little information.
Arrangements of the word HULLABALOO
Yes, these are all correct - standard approaches to solving these types of problems.
Derivative of $F(x)=x^2\sin(1/x^2)$ exists for all x, but fails to be (Lebesgue) integrable.
The issue with integrability is not that $F'$ is undefined at $x = 0$. As you claim and is in fact easy to show, $F'(0) = 0$. Clearly $F'$ is unbounded in the vicinity of $x = 0$ due to the term $\frac{2}{x} \cos \frac{1}{x^2}$, although this by itself does not preclude the existence of the Lebesgue integral. For example $x \mapsto 1/\sqrt{x}$ is unbounded but Lebesgue integrable on $[0,1]$. Nevertheless, $x \mapsto \frac{2}{x} \cos \frac{1}{x^2}$ fails to be Lebesgue integrable since using a change of variables $x = 1/\sqrt{u}$ we have $$\int_0^1\frac{2}{x} \left|\cos \frac{1}{x^2}\right| \, dx = \int_1^\infty\frac{|\cos u|}{u} \, du > \sum_{k=1}^\infty\int_{\pi k}^{\pi (k+1)}\frac{|\cos u|}{u} \, du \\ > \sum_{k=1}^\infty\frac{1}{\pi(k+1)}\int_{\pi k}^{\pi (k+1)}|\cos u| \, du = \sum_{k=1}^\infty\frac{2}{\pi(k+1)} = +\infty$$ Recall that a function $f$ is Lebesgue integrable over $E$ if and only if $\int_E |f| < \infty$.