Why are the dot product and cross product so weird?
In 2D, they are not so weird as you think. Write the vectors as complex numbers, $2+3i$ and $5+3i$ (where $i$ is the usual imaginary number, not your $\vec i$). The "ordinary" product is $$2\cdot5-3\cdot3+(2\cdot3+3\cdot5)i,$$ very close to the product of the conjugate of the first number by the second, $$2\cdot5+3\cdot3+(2\cdot3-3\cdot5)i,$$ where you recognize both the dot- and the cross-products! By the way, you say that for addition we are just "adding magnitudes", but this is not true: the magnitude of the sum can be smaller than the sum of the magnitudes. This whole "complication" (which is in fact a wonderful source of richness) comes from the fact that vectors need not have the same direction and do not add/multiply like scalars.
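A quick numeric check of this observation (a Python sketch of mine, not part of the original answer): for 2-D vectors encoded as complex numbers, $\bar z_1 z_2$ packs the dot product into the real part and the scalar cross product into the imaginary part.

```python
# Encode (2,3) and (5,3) as complex numbers and multiply conj(z1) by z2.
z1, z2 = complex(2, 3), complex(5, 3)
p = z1.conjugate() * z2
print(p.real)  # 19.0 = 2*5 + 3*3, the dot product
print(p.imag)  # -9.0 = 2*3 - 3*5, the (scalar) cross product
```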
Diophantine Equation problem: Find all pairs of integers (a,b) such that $ab|a^{2017} + b$
Let $\nu_p$ denote the $p$-adic order, i.e. $\nu_p(x)$ is the largest $i$ such that $p^i \mid x$. Any prime that divides $a$ divides $b$, and any prime that divides $b$ divides $a$. If $2017 \nu_p(a) \ne \nu_p(b)$, then $\nu_p(a^{2017}+b) = \min(2017 \nu_p(a), \nu_p(b)) \le \nu_p(b)$, while $\nu_p(ab) = \nu_p(a) + \nu_p(b) > \nu_p(b)$ (since $\nu_p(a)\ge 1$), so it's impossible for $ab \mid a^{2017} + b$. Thus we must have $2017 \nu_p(a) = \nu_p(b)$. Since this is true for all $p$ dividing $a$ or $b$, $b = a^{2017}$. Now our condition is $$ a^{2018} \mid 2 a^{2017},$$ i.e. $a \mid 2$, and so the only solutions are $a = b = 1$ and $a = 2$, $b= 2^{2017}$.
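A brute-force sanity check (my sketch, not part of the answer): the argument never uses the specific value $2017$, only that the exponent is at least $2$, so we can test it with exponent $5$ over small positive values.

```python
# Search small positive (a, b) with ab | a^E + b; expect b = a^E and a | 2.
E = 5
sols = [(a, b) for a in range(1, 30) for b in range(1, 5000)
        if (a**E + b) % (a * b) == 0]
print(sols)   # expect [(1, 1), (2, 32)], i.e. [(1, 1), (2, 2**5)]
```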
Clopen subsets of a compact metric space
It seems that we can show the claim as follows. It is well known and easy to prove that each metrizable compact space $X$ has a countable base. Fix such a base $\mathcal B$. Let $U$ be a clopen (that is, closed and open) subset of $X$. Each point $x\in U$ has a neighborhood $U_x\in\mathcal B$ such that $U_x\subset U$. Since $U$ is compact, there exists a finite subset $Y$ of $U$ such that $U=\bigcup\{U_x:x\in Y\}$. Hence the cardinality of the family of all clopen subsets of $X$ is not greater than the cardinality of the family of all finite subsets of $\mathcal B$, which is countable.
Killing Fields on Euclidean Spaces
We need the following lemma, whose proof will be at the bottom of this answer. Lemma: If $\phi, \psi$ are Riemannian isometries on open subsets $U \to V$ of $\mathbb R^n$, with $\phi(p) = \psi(p)$ and $d\phi_p = d\psi_p$ for some $p \in U$, then $\phi = \psi$. Now, suppose $V$ is a Killing vector field vanishing at $0$ in the kernel of the map $T : K \to \mathfrak o(n)$. Since $V$ vanishes at $0$, if $\theta$ is the flow of $V$, $\theta_t(0) = 0$ for every $t$ in the flow domain. We claim $d(\theta_t)_0$ is independent of $t$. For then $d(\theta_t)_0 = d(\theta_0)_0 = \mathrm{Id}$ and $\theta_t(0) = 0 = \mathrm{Id}(0)$, so by the lemma, $\theta_t = \mathrm{Id}$ on all of $\mathbb R^n$, and so $V = 0$. Let $\theta_t^i$ be the $i^\textrm{th}$ component of the flow $\theta_t$. Then $\dfrac{d}{dt}\bigg|_{t=0} \theta^i_t(x) = V^i(x)$ for $x \in \mathbb R^n$, where $V^i$ are the component functions of $V$. To prove the claim, it suffices to show that $\dfrac{\partial \theta_t^i}{\partial x^j}(0)$ is independent of $t$ for every $i,j$. This follows from Clairaut's theorem: for every $t_0$ in the flow domain, we have $$ \frac{d}{dt}\bigg|_{t=t_0} \frac{\partial}{\partial x^j} \theta_t^i(0) = \frac{\partial}{\partial x^j} \frac{d}{dt}\bigg|_{t=t_0} \theta_t^i(0) = \frac{\partial}{\partial x^j} \frac{d}{dt}\bigg|_{t=0} \theta^i_t(\theta_{t_0}(0)) = \frac{\partial}{\partial x^j} V^i(\theta_{t_0}(0)) = \frac{\partial V^i}{\partial x^j}(0) = 0 $$ since $V \in \ker(T)$. Therefore $\dfrac{\partial \theta_t^i}{\partial x^j}(0)$ is independent of $t$ for every $i,j$, hence so is $d(\theta_t)_0$, so $V=0$ by the lemma, so $T$ is injective. QED. Proof of lemma: Assume $U$ is convex. Isometries take line segments to line segments, so $V$ is also convex. For $q \in U$, let $\gamma(t) = p + t(q-p)$. Then $\phi \circ \gamma$ and $\psi \circ \gamma$ are both line segments from $r := \phi(p) = \psi(p)$ to $\phi(q)$ and to $\psi(q)$ respectively; that is, $\phi \circ \gamma(t) = r + t(\phi(q)-r)$ and $\psi \circ \gamma(t) = r + t(\psi(q) - r)$. Since $d\phi_p = d\psi_p$, we get $$\phi(q) - r = \frac d{dt}\bigg|_{t=0} (\phi \circ \gamma)(t) = d\phi_p(\dot\gamma(0)) = d\psi_p(\dot\gamma(0)) = \frac{d}{dt}\bigg|_{t=0} (\psi \circ \gamma)(t) = \psi(q)-r $$ so $\phi(q) = \psi(q)$. If $U$ is not convex, $U$ is open, hence a union of convex open sets, on each of which $\phi = \psi$. QED.
Probability of two people meeting up at a specific time period
Draw a square in the Cartesian coordinate plane with vertices at $(0,0), (1,0), (1,1), (0,1)$. The horizontal value represents the random time when Bob arrives, in hours after 1 PM. The vertical value represents the random time when Jane arrives, also in hours after 1 PM. So the point $(0.5, 0.25)$ represents Bob arriving at 1:30 PM and Jane at 1:15 PM. Now, what is the set of points in this square such that Bob and Jane meet? Clearly, they meet if and only if $|X - Y| \le 0.25$, where $X$ and $Y$ are the arrival times of Bob and Jane, respectively. What does the region defined by this inequality look like? What is its area? What is its area as a fraction of the total area of the square? Now, if you perform this geometric exercise, it might give you insight as to where your computation went wrong. If you were to perform it correctly, the integral would look like this, where I am using units of hours rather than minutes: $$\int_{x=0}^{1/4} \Pr[0 \le Y \le x + 1/4] \, dx + \int_{x=1/4}^{3/4} \Pr[x - 1/4 \le Y \le x + 1/4] \, dx \\+ \int_{x=3/4}^1 \Pr[x - 1/4 \le Y \le 1] \, dx.$$ But of course, it is easier to exploit symmetry and compute the complementary probability via the single integral $$1 - 2 \int_{x=0}^{3/4} \Pr[x + 1/4 < Y \le 1] \, dx.$$ In essence, your calculation only considers the case where Bob arrives before Jane, thus the condition in your integrand $Y \in [x, x + 15]$. Jane could arrive before Bob and still meet, so $Y \in [x - 15, x + 15]$ is appropriate whenever $x \in [15, 45]$. So this is why your probability evaluates to $7/32$: you are missing exactly half of the total probability, the missing component being the outcomes in which Jane arrives before Bob.
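If you want to double-check the value $7/16$ numerically, here is a minimal Monte Carlo sketch (my addition, not the OP's computation):

```python
# Estimate P(|X - Y| <= 1/4) for X, Y uniform on [0, 1].
import random

trials = 10**6
hits = sum(abs(random.random() - random.random()) <= 0.25 for _ in range(trials))
print(hits / trials)   # ~ 0.4375 = 7/16, twice the OP's 7/32
```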
$\sum_{n=1}^\infty \frac{1}{\sqrt{n}} \tan\left(\frac{1}{n}\right)$
For big enough $n$, $$\frac1{\sqrt n}\tan \left(\frac1n\right)\le\frac1{\sqrt n}\cdot\frac2n=\frac2{n^{3/2}},$$ and $\sum \frac{2}{n^{3/2}}$ converges ($p$-series with $p=\frac32>1$), so the series converges by comparison. Answer to OP's comment: Since $$\lim_{x\to0}\frac{\tan x}x=1$$ there is some $n_0\in\Bbb N$ such that for $n\ge n_0$ $$\frac{\tan\left(\cfrac1n\right)}{\cfrac 1n}<2$$ and hence, $$\tan\left(\cfrac1n\right)<\frac2n$$
Second degree Diophantine equations
About the algorithm: there is an algorithm that will determine, given any quadratic $Q(x_1,\dots,x_n)$ as input, whether or not the Diophantine equation $Q(x_1,\dots,x_n)=0$ has a solution. This is something that I (and others) observed quite a long time ago. I have no knowledge of a nice algorithm. Set one machine $M_1$ to search systematically for solutions. Another machine $M_2$ simultaneously checks whether there is a real solution (easy) and then checks systematically for every modulus $m$ whether there is a solution modulo $m$. By the Hasse Principle (which in this case is a theorem), if our equation has "local" solutions (real and modulo $m$ for every $m$) then it has an integer solution. So either $M_1$ will bump into a solution or $M_2$ will find a local obstruction to a solution. Thus the algorithm terminates. The corresponding question for cubics is unsolved. The same question for quartics (in arbitrarily many variables) is equivalent to the general problem of testing a Diophantine equation for solvability, so is recursively unsolvable. Added: I think that the details are written out in the book Logical Number Theory I by Craig Smorynski. Very nice book, by the way.
Lyapunov's stability theorem proof
Yes, you can deduce $3)$ from $2)$. However, you can give a much simpler proof of $3)$: note that at a point $y$ where $\dot V(y)>0$, the function $V$ grows in a neighborhood of that point along trajectories (the notation $V'(y)$ is bad since we are not taking the derivative of $V$). Since $V(y_0)=0$ (surely you have this condition), no trajectory can converge to $y_0$. With the same proof you can show that $y_0$ is unstable provided that there is a sequence $y_n\to y_0$ with $\dot V(y_n)>0$ for each $n$. No need to require that $\dot V(y)>0$ in a whole neighborhood outside $y_0$.
Irreducible polynomial over field
We know that $f(x) = x^5+ 4x^4 + 10x^2 - 6x + 2$ is irreducible over $\mathbb{Z}$ by Eisenstein's criterion, hence over $\mathbb{Q}$. Let $\alpha$ be a root of $f(x)$ in $\mathbb{C}$. Then $\mathbb{Q}(\alpha)$ is a degree $5$ field extension of $\mathbb{Q}$. Now, consider the field $\mathbb{Q}(\alpha)(7^{1/7}) = \mathbb{Q}(7^{1/7})(\alpha)$. We know that $\mathbb{Q}(7^{1/7})$ is a degree $7$ field extension of $\mathbb{Q}$ (the polynomial $x^7 - 7$ is irreducible over $\mathbb{Z}$ by Eisenstein's, hence also irreducible over $\mathbb{Q}$). Thus, $7|[\mathbb{Q}(7^{1/7})(\alpha):\mathbb{Q}]$. And, we also see that $5|[\mathbb{Q}(7^{1/7})(\alpha):\mathbb{Q}] = [\mathbb{Q}(\alpha)(7^{1/7}):\mathbb{Q}]$. Can you deduce from this that $[\mathbb{Q}(7^{1/7})(\alpha):\mathbb{Q}] = 35$? Then, you are basically done because $\mathbb{Q}(7^{1/7})(\alpha)$ is a degree $5$ field extension of $\mathbb{Q}(7^{1/7})$, and so $f(x)$ must be the minimal polynomial of $\alpha$ over $\mathbb{Q}(7^{1/7})$.
Parametrizing the surface $z=\log(x^2+y^2)$
It looks like polar coordinates would be good for this, i.e. $x=u\cos v,\ y=u \sin v$ where $1\le u \le \sqrt{5}$ and $0 \le v \le 2\pi.$ [The last should technically have $v<2\pi$ but that doesn't matter in integration.] Then $z$ comes from your formula: $z=\log u^2 = 2\log u.$
Interchanging limit of sequence of functions
No. Let $\Omega = (0,1) \subseteq \mathbb{R}$. Let $u \equiv 1$. Let $u_n = 1$ on $(\frac{1}{n},1)$ and $u_n = 0$ on $(0,\frac{1}{n})$. Then $u_n \to u$ pointwise everywhere, $\min_{x \in \Omega} u_n(x) = 0$ for each $n$, and $\min_{x \in \Omega} u(x) = 1$.
Show that $g_*X = X$ for a symplectomorphism $g$ (Lectures on Symplectic Geometry Exercise)
Note that $g^*\alpha=\alpha$ and $\omega = -d\alpha$ imply $g^* \omega = \omega$, since pullback commutes with $d$. Then for any $Y$, $$\begin{split} \omega (g_* X,Y) &= (g^* \omega) (X, g^{-1}_* Y) \\ &= \omega(X, g^{-1}_* Y) \\ &= \iota_X \omega (g^{-1}_* Y) \\ &=-\alpha (g^{-1}_* Y) \\ &= -((g^{-1})^* \alpha)(Y) \\ &=-\alpha(Y) \\ &= \iota_X \omega (Y) \\ &= \omega (X, Y). \end{split}$$ Since $Y$ is arbitrary and $\omega$ is nondegenerate, $X = g_*X$.
How to solve a PDE with time dependent domain?
Some remarks; not a full answer. First, as I said in the comments, the condition at $W(0,0)$ is not clear, as it could be $W(0,0)=1$ or $W(0,0)=0$, because for $v=0$ we have $\theta=0$ as well. Second, it makes sense to reduce the number of parameters and simplify the conditions by setting: $$2( \alpha-\beta)=a$$ $$v=\frac{1}{\epsilon}t,\qquad t \in (0,1)$$ $$\theta=ah(t)$$ $$h(t)=3t-2t^2$$ $$z=ay,\qquad z \in [h(t),\infty)$$ $$h(0)=0, \qquad h(1)=1$$ Then the equation becomes: $$\frac{\partial W(t,y)}{\partial t}= \frac{1}{\epsilon a^2} \frac{\partial^{2} W(t,y)}{\partial y^{2}}$$ $$W(0,y)=e^{-ay}$$ $$W(t,3t-2t^2)=0$$ Here the contradiction between the initial and "boundary" conditions becomes even more apparent. As for the latter condition, it just states that for: $$y(t)=3t-2t^2 \Rightarrow W(t,y)=0$$ The domain of the equation looks like this: [figure omitted] While the above doesn't tell anything about the method of solution, I think it simplifies the problem statement. As for the question of the OP, "Is it possible to solve it?", I answer "Yes" and reference a 1997 paper (in Russian, unfortunately), which is available here, and which introduces a general method of solving exactly this kind of problem.
Is dimension the only invariant of a vector space?
It's the only invariant because if the dimensions match you can choose a basis in each and use those equicardinal bases to construct an isomorphism.
Solving $y'' + 2xy' - y = -1$ where $y = y(x)$
We need to solve the second order linear ordinary differential equation: $$\color{red}{y''+2xy'-y=-1, \quad y=y(x)}.$$ Since this is a second-order non-homogeneous ODE, I must obtain the solution to the homogeneous equation first: $$y''+2xy'−y=0.$$ Yes, that's correct. Now, to solve $$\color{red}{y''+2xy'-y=0}$$ we can use Yuval's hint. Note that we have no singular points, so we can find solutions in power series centered at $0$, convergent for $|x|<\infty$. Let $$\color{blue}{(\operatorname{V.C}):\begin{cases} \displaystyle y=\sum_{n=0}^{+\infty} a_{n}x^{n}\\ \displaystyle y'=\sum_{n=1}^{+\infty} a_{n}\cdot n x^{n-1}\\ \displaystyle y''=\sum_{n=2}^{+\infty}a_{n}\cdot n \cdot (n-1) x^{n-2} .\end{cases}}$$ Then, we can re-write $$y''+2xy'-y=0 $$ as $$\left( \sum_{n=2}^{+\infty}a_{n}\cdot n \cdot (n-1) x^{n-2} \right)+2x\left( \sum_{n=1}^{+\infty} a_{n}\cdot n x^{n-1} \right)-\left( \sum_{n=0}^{+\infty} a_{n} x^{n} \right)=0.$$ Now, you must find the recurrence relation for the $a_{k}$ (shifting indices gives $(k+2)(k+1)a_{k+2}+(2k-1)a_{k}=0$), where $a_0$ and $a_1$ can be chosen freely. Then, you can find $y(x)$. Finally, you need to solve $$y''+2xy'-y=-1,$$ for which you need to find a particular solution; you can use several methods: undetermined coefficients, the annihilator method, or variation of parameters. On the other hand, it seems to me that the solution to the problem cannot be found in closed form.
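As a sanity check on the recurrence above, here is a small Python sketch (the helper names are mine, purely illustrative) that builds the series coefficients via $a_{k+2}=\frac{(1-2k)a_k}{(k+2)(k+1)}$ and verifies that the truncated series nearly satisfies the homogeneous equation:

```python
# Build coefficients of a homogeneous power-series solution and test the ODE
# residual y'' + 2xy' - y at a sample point; it should be ~0 for |x| small.
def series_coeffs(a0, a1, n_terms):
    a = [0.0] * n_terms
    a[0], a[1] = a0, a1
    for k in range(n_terms - 2):
        a[k + 2] = (1 - 2 * k) * a[k] / ((k + 2) * (k + 1))
    return a

def eval_series(a, x):
    y = sum(c * x**k for k, c in enumerate(a))
    yp = sum(k * c * x**(k - 1) for k, c in enumerate(a) if k >= 1)
    ypp = sum(k * (k - 1) * c * x**(k - 2) for k, c in enumerate(a) if k >= 2)
    return y, yp, ypp

a = series_coeffs(1.0, 0.0, 40)    # the solution with a0 = 1, a1 = 0
y, yp, ypp = eval_series(a, 0.3)
print(ypp + 2 * 0.3 * yp - y)      # residual, ~1e-16
```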
Double cross product in 2D
We can use the BAC-CAB rule as follows: $$\mathbf{F}_{centrifugal} = -m \boldsymbol{\omega} \times [\boldsymbol{\omega} \times \mathbf{r}]=-m((\boldsymbol{\omega}\cdot\boldsymbol{r})\boldsymbol{\omega}-|\boldsymbol{\omega}|^2\boldsymbol{r})=m|\boldsymbol{\omega}|^2\boldsymbol{r}-m(\boldsymbol{r}\cdot \boldsymbol{\omega})\boldsymbol{\omega}$$
A problem with defining a norm to get a contraction
The desired equality is in general false; it is true if the three differences are mutually orthogonal (see the Pythagorean theorem in Hilbert space). The usual procedure in this situation is to use the triangle inequality.
Can someone give an example of an ideal $I \subset R= \Bbb{Z}[x_1,...x_n]$ with $R /I \cong \Bbb{Q}$?
Proposition If $I\subset R=\mathbb Z[X_1,\cdots X_n]$ is a maximal ideal, then the quotient $R/I$ is a finite field. Proof The ring $R$ is Jacobson: this means that each of its prime ideals is the intersection of the maximal ideals which contain it. The canonical ring morphism $f:\mathbb Z\to R$ is a morphism between Jacobson rings and thus has the wonderful property that the inverse image of a maximal ideal is maximal. This implies in our case that $f^{-1}(I)\subset \mathbb Z$ is a maximal ideal, necessarily of the form $p\mathbb Z$ for some prime integer $p$, so that $\mathbb Z/f^{-1}(I)=\mathbb Z/p\mathbb Z=:\mathbb F_p$. But then the extension field $\mathbb F_p \subset R/I$ is finitely generated as an algebra over $\mathbb F_p$, and is thus a finite-dimensional vector space over $\mathbb F_p$ by Zariski's version of the Nullstellensatz. Thus we conclude that $R/I$ is a finite field $\mathbb F_{p^n}$. Bibliography The best reference on Jacobson rings is the very last section of Chapter 5 of Bourbaki's Commutative Algebra. Zariski's Theorem is Proposition 7.9 in Atiyah-Macdonald. Wikipedia has the cheek to call it Zariski's lemma :-)
Can two singular matrices ever be row equivalent?
To find an example, start with a singular matrix that's already in rref form and do a row operation on it to make another singular matrix that is row equivalent to it.
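For instance, one concrete pair (a minimal example of my choosing, not the only one): $$\begin{pmatrix}1&0\\0&0\end{pmatrix} \xrightarrow{\ R_1\to 2R_1\ } \begin{pmatrix}2&0\\0&0\end{pmatrix},$$ where the first matrix is singular and already in rref, and the row operation produces a row-equivalent matrix that is still singular (both have rank $1$).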
Improper integral convergence problem
We have $$\arctan x=\frac{\pi}{2}-\arctan\left(\frac 1 x\right),\quad x>0$$ and $$\arctan\left(\frac 1 x\right)\sim_\infty \frac 1 x$$ so $$\arctan 2x-\arctan x=\arctan\left(\frac1x\right)-\arctan\left(\frac1{2x}\right)\sim_\infty \frac1x-\frac1{2x}=\frac1{2x},$$ hence we find $$\frac{\arctan 2x -\arctan x}{x}\sim_\infty\frac{ 1}{2 x^2}$$ so we deduce the desired result
Show that the product of two matrices of rank $n$ is not equal to the zero matrix
Let's assume $A,B$ are $n\times n$ matrices whose product is $AB=0$. Considering the matrix $A$ as acting on the columns of $B$, the nullspace of $A$ has to contain the column space of $B$ in order to get a zero product. That means the nullity of $A$ (the dimension of its nullspace) has to be at least the (column) rank of $B$. Now specialize to the data in your problem, where $A$ and $B$ have rank $2$, and where, by the Rank-Nullity Theorem, the nullity of $A$ is ... ?
Find a subgroup of order 6 in $U(700)$
$U(n)$ is the group of units of the integers modulo $n$, that is it consists of elements $[x]$ with $\gcd(x,n)=1$, $[x]=[y]$ when $x\equiv y\pmod n$ and $[x][y]=[xy]$, right? The $U(7)$ in your decomposition consists of the elements $[x]$ of $U(700)$ with $x\equiv1\pmod{100}$, that is it consists of $[1]$, $[101]$, $[201]$, $[401]$, $[501]$ and $[601]$. Have any of these order $6$ in $U(700)$?
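A quick computational check (a sketch of mine; the helper `order` is hypothetical, not from the answer):

```python
# Compute the multiplicative order of each listed element modulo 700.
from math import gcd

def order(x, n):
    k, y = 1, x % n
    while y != 1:
        y = y * x % n
        k += 1
    return k

for x in [1, 101, 201, 401, 501, 601]:
    assert gcd(x, 700) == 1
    print(x, order(x, 700))   # e.g. 101 and 201 turn out to have order 6
```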
Simple inequality over positive reals: $2(x+y+z) \geq 3xyz + xy+yz+zx$ for $xyz=1$
With your substitution we get $$a(a-b)(b-c)+b(a-c)(b-c)+c(a-c)(a-b)\geq 0,$$ and this is true for $$a\geq b\geq c.$$
Show that $\sum_{k=0}^\infty \frac{1}{k!}=e$ using $e=\lim_{n\to\infty }(1+\frac{1}{n})^n$
Hint: It may be seen as a consequence of the dominated convergence theorem, since for every fixed $k$: $$ \lim_{n\to +\infty}\frac{n!}{(n-k)!n^k} = \lim_{n\to +\infty}\left(1-\frac{1}{n}\right)\cdot\ldots\cdot\left(1-\frac{k-1}{n}\right) = 1.$$
How can you tell how many invariant factors a matrix has?
I'm not really sure if this is what you mean, but the minimal polynomial is going to be $(x-1)(x+1)^2$. The factor $x-1$ has to be there, because you have an eigenvector for eigenvalue $1$. Now, for the eigenvalue $-1$ you have the generalised eigenspace of dimension $3$ (that's what the factor $(x+1)^3$ in the characteristic polynomial tells you), and there are two linearly independent eigenvectors (that's the geometric multiplicity). So, there are two Jordan blocks for $-1$ in the Jordan form of $A$, and the sum of their dimensions is $3$. This can only happen if one of them is $1 \times 1$, and the other is $2 \times 2$. It follows that you need the factor $(x+1)^2$ in the minimal polynomial (to kill the $2 \times 2$ block). Hence the final answer $(x-1)(x+1)^2$.
Question about this proof (exactness of a sequence)
Let me rewrite the things more explicitly: Since $\newcommand{\coker}{\operatorname{coker}} \newcommand{\Hom}{\operatorname{Hom}} \newcommand{\im}{\operatorname{im}}$$0 \to \Hom(N'',\coker g) \stackrel{g^*}{\to} \Hom(N,\coker g)$ is exact, $g^*$ is injective, and then, if $\pi : N'' \to \coker g$ is the canonical projection, $g^*(\pi) = \pi \circ g = 0$ implies that $\pi = 0$. Thus $\coker g = \im \pi = 0$, meaning that $g$ is surjective. Since $\Hom(N'',N'') \stackrel{g^*}{\to} \Hom(N,N'') \stackrel{f^*}{\to} \Hom(N',N'')$ is exact, $(g \circ f)^* = f^* \circ g^* = 0$, and in particular $0 = (g \circ f)^*(\operatorname{id}_{N''}) = \operatorname{id}_{N''} \circ\, (g \circ f) = g \circ f$, which implies that $\im f \subseteq \ker g$, and then $g$ induces a homomorphism $h : \coker f \to N''$ such that $h(n+\im f) = g(n)$ for every $n \in N$. Now, $\Hom(N'',\coker f) \stackrel{g^*}{\to} \Hom(N,\coker f) \stackrel{f^*}{\to} \Hom(N',\coker f)$ is exact, so, if $\pi : N \to \coker f$ is the canonical projection, $f^*(\pi) = \pi \circ f = 0$ implies that $\pi \in \ker f^* = \im g^*$, meaning that $\pi = g^*(j) = j \circ g$ for some homomorphism $j : N'' \to \coker f$. In other words, $j(g(n)) = n+\im f$ for all $n \in N$, that is, $j$ is the inverse of $h$.
If a NP-complete problem is not in P, then all NP-complete problems are not in P?
If we assume that there is an $NP$-complete problem, let's say $p_1$, that's solvable in polynomial time, and another $NP$-complete problem $p_2$ that's not solvable in polynomial time, then we can reduce $p_2$ to $p_1$ in polynomial time and solve it in polynomial time, which is a contradiction to the fact that $p_2$ is not in $P$. Or, looking at it another way: if there's an $NP$-complete problem in $P$, then $P=NP$, so it would be impossible for any $NP$ problem to be unsolvable in polynomial time. So this implies that if we could prove that there's an $NP$-complete problem that's not in $P$, then all $NP$-complete problems are not in $P$.
Can any total ordering on $\Bbb R$ be mapped to the standard order?
The natural total order $(\Bbb R,\le)$ has the property that every family of disjoint, non-singleton and non-empty intervals is countable. For total orders this property is hereditary, which means that the order induced on any subset must have it too. The lexicographic order on $\Bbb R^2$ does not have this property, therefore it cannot be realised as a suborder of $(\Bbb R,\le)$. However, the lexicographic order is isomorphic to some order $(\Bbb R,\preceq)$, by using a bijection $f:\Bbb R\to\Bbb R^2$.
Are there any two-dimensional quadrature that only uses the values at the vertices of triangles?
One-dimensional Let's go back to the basics. Compute the integral over the interval $[-1/2,+1/2]$ of a function $f(\xi)$ numerically, by employing a 2-point Gauss-Legendre integration: $$ \int_{-1/2}^{+1/2} f(\xi)\,d\xi = w_1.f(\xi_1)+w_2.f(\xi_2) $$ Determine weights $(w_1,w_2)$ and locations $(\xi_1,\xi_2)$ of the so-called integration points by the requirement that a polynomial of degree $3$ is integrated exactly. Four equations can be set up, as follows: $$ 1 = \int_{-1/2}^{+1/2} 1\,d\xi = w_1.1 + w_2.1 \quad \mbox{(1)} \\ 0 = \int_{-1/2}^{+1/2} \xi\,d\xi = w_1.\xi_1 + w_2.\xi_2 \quad \mbox{(2)}\\ \frac{1}{12} = \int_{-1/2}^{+1/2} \xi^2\,d\xi = w_1.\xi_1^2 + w_2.\xi_2^2 \quad \mbox{(3)}\\ 0 = \int_{-1/2}^{+1/2} \xi^3\,d\xi = w_1.\xi_1^3 + w_2.\xi_2^3 \quad \mbox{(4)} $$ These four equations are sufficient to determine the two weights $(w_1,w_2)$ and the two locations $(\xi_1,\xi_2)$. Now the two equations (2),(4) are easily satisfied by adopting the following equalities: $w_1 = w_2$, $\xi_1 = - \xi_2$. Substitution into equation (1) gives: $w_1 = w_2 = 1/2$. Substitute this into the remaining equation (3), giving: $1/12 = \xi_2^2$. Hence $\xi_2 = \sqrt{3}/6$. Summarizing: $$ (w_1,w_2) = (\frac{1}{2},\frac{1}{2}) \quad \mbox{and} \quad (\xi_1,\xi_2) = (-\frac{\sqrt{3}}{6},+\frac{\sqrt{3}}{6}) $$ In Finite Element culture, sometimes reduced integration is employed, meaning in our case that $f(\xi)$ shall be integrated as if it were only a polynomial of degree $1$, which is linear. In this case we have only the equations (1) and (2) that must be satisfied: $1 = w_1 + w_2$ and $0 = w_1.\xi_1 + w_2.\xi_2$. There are myriad ways to accomplish this. Let's assume for reasons of symmetry that $w_1 = w_2 = 1/2$. Then two extreme cases are $\xi_1=\xi_2=0$ and $(\xi_1,\xi_2) = (-1/2,+1/2)$. The first case is also covered by a one-point Gaussian quadrature, which is the better way to do it: $$ 1 = \int_{-1/2}^{+1/2} 1\,d\xi = w_1 \quad ; \quad 0 = \int_{-1/2}^{+1/2} \xi\,d\xi = w_1\xi_1 \quad \Longrightarrow \quad (w_1,\xi_1) = (1,0) $$ The second case is integration at the nodal points, according to the OP's wishful thinking. Let's work out the above for a specific problem that has been solved recently at MSE: Understanding Galerkin method of weighted residuals. In that answer we find the following expressions for the finite element matrix (look it up): $$ E_{0,0}^{(i)} = E_{1,1}^{(i)} = 1/(x_{i+1}-x_i)+(x_{i+1}-x_i)\,p^2/3 \\ E_{0,1}^{(i)} = E_{1,0}^{(i)} = -1/(x_{i+1}-x_i)+(x_{i+1}-x_i)\,p^2/6 $$ If we hadn't worked out the integrals, then this would have been: $$ E_{0,0}^{(i)} = E_{1,1}^{(i)} = 1/(x_{i+1}-x_i)+(x_{i+1}-x_i)\,p^2 \int_{-1/2}^{+1/2} \left(\frac{1}{2}\pm\xi\right)^2 d\xi \\ E_{0,1}^{(i)} = E_{1,0}^{(i)} = -1/(x_{i+1}-x_i)+(x_{i+1}-x_i)\,p^2 \int_{-1/2}^{+1/2} \left(\frac{1}{4}-\xi^2\right) d\xi $$ The integrals can be worked out with integration points as well. First take the common Gauss points: $$ \int_{-1/2}^{+1/2} \left(\frac{1}{2}\pm\xi\right)^2 d\xi = \frac{1}{2}\left(\frac{1}{2}-\sqrt{3}/6\right)^2+\frac{1}{2}\left(\frac{1}{2}+\sqrt{3}/6\right)^2 = \frac{1}{4}+\frac{1}{12} = \frac{1}{3} \\ \int_{-1/2}^{+1/2} \left(\frac{1}{4}-\xi^2\right) d\xi = 2\times \frac{1}{2}\left[\frac{1}{4}-\left(\pm\sqrt{3}/6\right)^2\right] = \frac{1}{4} - \frac{1}{12} = \frac{1}{6} $$ Exactly as it should be, of course. Now we do reduced integration for the two extreme cases.
The first case is the center of the element, $\xi = 0$: $$ \int_{-1/2}^{+1/2} \left(\frac{1}{2}\pm \xi\right)^2 d\xi = 1.\left(\frac{1}{2}\pm 0\right)^2 = \frac{1}{4} \\ \int_{-1/2}^{+1/2} \left(\frac{1}{4}-\xi^2\right) d\xi = \frac{1}{4} - 0^2 = \frac{1}{4} $$ Last but not least, the case that is wanted by the OP: reduced integration at the nodal points of the element, $(\xi_1,\xi_2) = (-1/2,+1/2)$: $$ \int_{-1/2}^{+1/2} \left(\frac{1}{2}\pm \xi\right)^2 d\xi = \frac{1}{2}\left(\frac{1}{2}-\frac{1}{2}\right)^2 + \frac{1}{2}\left(\frac{1}{2}+\frac{1}{2}\right)^2 = \frac{1}{2} \\ \int_{-1/2}^{+1/2} \left(\frac{1}{4}-\xi^2\right) d\xi = 2\times \frac{1}{2}\left[\frac{1}{4}-\left(\pm\frac{1}{2}\right)^2\right] = 0 $$ Thus we now have for the finite element matrix at hand: $$ E_{0,0}^{(i)} = E_{1,1}^{(i)} = 1/(x_{i+1}-x_i)+(x_{i+1}-x_i)\,p^2 \times \begin{cases} 1/3 \\ 1/4 \\ 1/2 \end{cases} \\ E_{0,1}^{(i)} = E_{1,0}^{(i)} = -1/(x_{i+1}-x_i)+(x_{i+1}-x_i)\,p^2 \times \begin{cases} 1/6 \\ 1/4 \\ 0 \end{cases} $$ The question is: which one is best? All these integration schemes seem to be equally good at first sight. At first sight, because there is another issue to be covered: stability / robustness. The following book is downloadable from the internet. It's about Finite Volume methods, but it contains important guidelines for the Finite Element method as well. Suhas V. Patankar, Numerical Heat Transfer and Fluid Flow. According to one of Patankar's "Four Basic Rules" (on page 37), "the rule of positive coefficients", off-diagonal terms $E_{i,j}$ with $i \ne j$ must be less than zero. With our 1-D problem this means a restriction on the mesh size for common and one-point integration (the same colors are employed in the picture referenced below): $$ E_{0,1}^{(i)} = E_{1,0}^{(i)} = -1/(x_{i+1}-x_i)+(x_{i+1}-x_i)\,p^2 \times C < 0 \\ \Longleftrightarrow \quad C\,\left[\,p\,(x_{i+1}-x_i)\,\right]^2 < 1 \quad \mbox{with} \quad C = \begin{cases} \color{blue}{1/6} \\ \color{red}{1/4} \\ \color{green}{0} \end{cases} $$ The above condition becomes increasingly difficult to fulfill when $\,p\,$ is large. The only exception is $\,C=\color{green}{0}$, which happens to be: integration (points) at the element vertices! This was seen in a picture (omitted here) for our constant $p=100$ and number of mesh points $N=20$; mind the different colors there. Thus we may conclude that the case with $\,\color{red}{C = 1/4}\,$ (i.e. centroid integration) is worst and the case with $\color{green}{C = 0}$ (i.e. vertex integration) is best. Especially in higher dimensional problems, the coefficients of equations may vary wildly from place to place. So robustness (i.e. unconditional stability) is not a luxury, certainly not in 2-D and 3-D. Two-dimensional But there is more. Alternative integration schemes in FEM are sort of a stargate for incorporating knowledge from other numerical methods, such as the Finite Volume Method. As is exemplified here, for example: What is the difference between Finite Difference Methods, Finite Element Methods and Finite Volume Methods for solving PDEs? In that answer it has not been mentioned that integration points at a quadrilateral finite element are actually taken at the nodal points, i.e. the vertices, as has been suggested in the OP's question, resulting in a splitting of the quadrilateral into four linear triangles. Finally, these triangles comprise an equivalent finite volume method for the diffusion problem in a finite element context.
A much more complete account of this theory is found elsewhere, being too large to fit into the margins of MSE: 2-D Elementary Substructures. Three-dimensional The higher the dimension, the more difficult it is to formulate a concise answer. But it is the same story all over the place: integration points at the vertices (of a brick in 3-D) are the best option to guarantee unconditional stability of finite element schemes, where it should be emphasized that numerical schemes don't have to be the same for different terms of the partial differential equation. The result of the research referenced below has been the publication of a paper by Horst Fichtner, this author and others, titled Longitudinal gradients of the distribution of anomalous cosmic rays in the outer heliosphere. Some (numerical and graphical) details are found in the following references: Presentation at Woudschoten 1996; On solving a Cosmic Ray equation; Resistor Models for Diffusion in 3-D; Recipe for Convection & Diffusion in 3-D; CR pressure with AVS.
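Coming back to the one-dimensional element integrals worked out above, here is a small numeric sketch (my addition, not from the answer or the paper) that evaluates the two integrands under the three schemes:

```python
# Evaluate (1/2 ± ξ)^2 and 1/4 − ξ^2 with full Gauss, centroid and vertex rules.
from math import sqrt

schemes = {
    "Gauss":    [(0.5, -sqrt(3) / 6), (0.5, +sqrt(3) / 6)],
    "centroid": [(1.0, 0.0)],
    "vertex":   [(0.5, -0.5), (0.5, +0.5)],
}

for name, pts in schemes.items():
    diag = sum(w * (0.5 + xi) ** 2 for w, xi in pts)   # diagonal integrand
    off  = sum(w * (0.25 - xi ** 2) for w, xi in pts)  # off-diagonal integrand
    print(f"{name:9s} diagonal={diag:.4f} off-diagonal={off:.4f}")
# Expected: Gauss -> 1/3 and 1/6; centroid -> 1/4 and 1/4; vertex -> 1/2 and 0.
```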
How to find the smallest set of generating elements in a group?
The following method can be used to find a smallest set of generating elements in a finite group. It is based on the following theorem: Suppose $G$ is a finite group and $\{X_i\}_{i = 1}^{n}$ are i.i.d. uniformly distributed random elements of $G$. Then $P(\langle \{X_i\}_{i = 1}^{n} \rangle = G) = \sum_{H \leq G} \mu(G, H) {\left(\frac{|H|}{|G|}\right)}^n$, where $\mu$ is the Möbius function of the subgroup lattice of $G$. Thus, the smallest possible cardinality of a generating set can be described by the following formula: $$\min\left\{n \in \mathbb{N}\ \middle|\ \sum_{H \leq G} \mu(G, H) {\left(\frac{|H|}{|G|}\right)}^n > 0\right\}$$ And if we know the smallest possible cardinality of a generating set (let's denote it by $s$), then we can find an example by checking each of the $\binom{|G|}{s}$ subsets of size $s$ on whether it lies in some maximal proper subgroup or not. If it lies in none of them, it generates $G$ and is therefore a smallest possible generating set.
Any square matrix is equivalent to zero diagonal matrix
Yes. Note that two square matrices are equivalent if and only if they have the same rank. Let $E_{i,j} = e_i e_j^T$ denote the matrix with zeros except in the $i,j$ entry, where it has a $1$. We can define $M_0 = 0$, $$ M_{k} = \sum_{j=1}^k E_{j,j+1} $$ for $k = 1,\dots,n-1$, and $$ M_n = M_{n-1} + E_{n,1} $$ We note that rank$(M_k) = k$, so that we have selected one element from each class of equivalent matrices. The conclusion follows.
Which of the statements are true (CSIR)?
Notice that for $x\neq0$ we have $f(x)=f(\frac{\lvert\lvert x\rvert\rvert}{\beta}\frac{\beta x}{\lvert\lvert x\rvert\rvert})=f(\frac{\beta x}{\lvert\lvert x\rvert\rvert})\beta^{-\alpha}\lvert\lvert x\rvert\rvert^{\alpha}=C\lvert\lvert x\rvert\rvert^{\alpha}$ where $C$ is a constant independent of choice of $x$ since the term on the inside is on the $\beta$ ball about $0$ and by assumption $f(x)=f(y)$ when $\lvert\lvert x\rvert\rvert=\lvert\lvert y\rvert\rvert=\beta$. Thus, 3 is true (notice that if $f(0)=a$ then $a=f(0)=f(r\times0)=r^{\alpha}a$ so either $a=0$ or $r=1$ but this equation is true with arbitrary $r$. Hence, $f(0)=0$ and agrees with $C\lvert\lvert x\rvert\rvert^{\alpha}$ for all $x$.). We can't necessarily guarantee 1 without more information. For a counterexample to 1 consider $f(x)=-\lvert\lvert x\rvert\rvert^{\alpha}$. A counterexample to 4 is $f(x)=\lvert\lvert x\rvert\rvert^{\alpha}$. So we can't guarantee 4 either. I'm not sure how to answer 2 since I'm not entirely certain how $\beta$ is involved.
showing a function has exactly one root using Numerical Analysis Methods
Let $f(x)=e^{\frac{x}{2}}-25x^2 \,.$ Then $f(0)=1>0$, $f(-1)<0$ and $f$ is continuous, so the IVT tells us there is at least one root in $(-1,0)$. Now, $f'(x)=\frac12 e^{x/2}-50x>0$ on $(-\infty,0)$, which means that the function is strictly increasing there. Thus it cannot have more than one root on that interval...
Number of solutions of quadratic congruence
If $n$ is small enough, you could simply square all the residues modulo $n$ and see which of these squares are equal to $a$. In the general case, you will likely need to use the prime factorization $n = p_1^{a_1} \dots p_r^{a_r}$, and then use the Chinese Remainder Theorem along with a way of solving $x^2 \equiv a \pmod{p_i^{a_i}}$ for all $1 \leq i \leq r$. To solve $x^2 \equiv a \pmod{p^k}$, this article goes through the procedure in detail, so I won't repeat it here. As for how many solutions the congruence has, by the Chinese Remainder Theorem, it is the product over $i$ of the number of solutions each congruence $x^2 \equiv a \pmod{p_i^{a_i}}$ has. Also relevant is the answer to this question. Hope this helps.
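For small moduli, the brute-force approach of the first sentence is a few lines of Python (a sketch; the function name is mine):

```python
# List all x in [0, n) with x^2 ≡ a (mod n).
def sqrt_solutions(a, n):
    return [x for x in range(n) if (x * x - a) % n == 0]

print(sqrt_solutions(1, 8))   # [1, 3, 5, 7] -- four solutions mod 8
print(sqrt_solutions(2, 7))   # [3, 4]
```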
Singular value decomposition works only for certain orthonormal eigenvectors, not all?
You need to match the left singular vectors to the right ones, or vice versa. E.g. after you have computed $e_1'$ and $e_2'$, you could get the two corresponding left singular vectors as $e_1=Ae_1'/\|Ae_1'\|=\frac1{\sqrt{5}}(1,0,2)^T$ and $e_2=Ae_2'/\|Ae_2'\|=(0,\color{red}{-1},0)^T$ (note: the sign of $e_2$ here is different from yours). The remaining left singular vector can be any unit vector orthogonal to the previous two singular vectors (in this case, it must be $\pm e_1\times e_2$, where the sign is unimportant). Then \begin{align*} A&= \pmatrix{e_1&e_2&\pm e_1\times e_2} \pmatrix{\frac{\|Ae_1'\|}{\|e_1'\|}&0\\ 0&\frac{\|Ae_2'\|}{\|e_2'\|}\\ 0&0} \pmatrix{\frac{(e_1')^T}{\|e_1'\|}\\ \frac{(e_2')^T}{\|e_2'\|}}\\ &=\pmatrix{\frac{1}{\sqrt{5}}&0&\frac{\pm2}{\sqrt{5}}\\ 0&-1&0\\ \frac{2}{\sqrt{5}}&0&\frac{\mp1}{\sqrt{5}}} \pmatrix{\sqrt{10}&0\\ 0&\sqrt{8}\\ 0&0} \pmatrix{\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\\ \frac{-1}{\sqrt{2}}&\frac{1}{\sqrt{2}}}. \end{align*}
$\frac{1}{(1-x)^{2}}=\sum_{k = 0}^{n}(k + 1)x^k+o(x^{n}).$
As you requested: $$\frac{1}{1-x}\underset{x\to 0}{=}\sum_{k=0}^{n+1}x^{k}+o(x^{n+1})$$ Differentiating with respect to $x$: $$\begin{aligned} \frac{\text{d}}{\text{d}x}\left(\frac{1}{1-x}\right) &\underset{x\to 0}{=}\frac{\text{d}}{\text{d}x}\left(\sum_{k=0}^{n+1}x^{k}+o(x^{n+1})\right)\\ &\underset{x\to 0}{=}\sum_{k=0}^{n+1}\frac{\text{d}}{\text{d}x}\left(x^{k}\right)+\frac{\text{d}}{\text{d}x}\left(o(x^{n+1})\right) \end{aligned}$$ Here is the line you have to pay attention to. The sum goes from $k=0$ to $k=n+1$, so that you have: $$\begin{aligned} \frac{\text{d}}{\text{d}x}\left(\frac{1}{1-x}\right) &\underset{x\to 0}{=}\sum_{k=0}^{n+1}\frac{\text{d}}{\text{d}x}\left(x^{k}\right)+\frac{\text{d}}{\text{d}x}\left(o(x^{n+1})\right)\\ &\underset{x\to 0}{=}\sum_{k=1}^{n+1}kx^{k-1}+o(x^{n})\\ \end{aligned}$$ where you can see that this sum now goes from $k=1$ (because the derivative of a constant is zero) to $k=n+1$ (because there is no reason for the term $\frac{\text{d}}{\text{d}x}\left(x^{n+1}\right)$ to disappear). Now, by replacing $k-1$ by $l$, the sum goes from $l=0$ to $l=n+1-1=n$ and it gives what you expected: $$\frac{\text{d}}{\text{d}x}\left(\frac{1}{1-x}\right)=\frac{1}{(1-x)^{2}}\underset{x\to 0}{=}\sum_{l=0}^{n}(l+1)x^{l}+o(x^{n})$$
How can I prove by induction that $9^k - 5^k$ is divisible by 4?
${\color{White}{\text{Proof without words.}}}$
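The answer above is typeset in white, so it renders invisible: a "proof without words" joke. For the record, here is a minimal induction sketch (my addition). The base case is $9^1-5^1=4$, and for the induction step, $$9^{k+1}-5^{k+1}=9\left(9^{k}-5^{k}\right)+5^{k}(9-5)=9\left(9^{k}-5^{k}\right)+4\cdot 5^{k},$$ where the first summand is divisible by $4$ by the induction hypothesis and the second is visibly a multiple of $4$.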
converging subsequence on a circle
Consider the continued fraction for $2\pi$. Let $\frac{h_n}{k_n}$ be the $n^{th}$ convergent. Then the numbers $h_n$ will give you a subsequence which converges to $(1,0)$. Since $$\left|2\pi -\frac{h_n}{k_n}\right|\leq \frac{1}{k_n k_{n+1}},$$ (see this theorem) it follows that $$|2k_n \pi -h_n|\leq \frac{1}{k_{n+1}}.$$ Since $k_{n+1}\rightarrow \infty$, and both $\sin$ and $\cos$ are continuous, it follows that $$\lim_{n\rightarrow \infty} (\cos (h_n), \sin (h_n))=(1,0).$$ Hope that helps,
1/n + 1/(n+2) and Prime Pythagorean Triples
We have $\frac{1}{n}+\frac{1}{n+2}=\frac{2n+2}{n(n+2)}$. Note that actually you do not need to have a reduced fraction: if the numerator and the denominator have a common factor, just cancel it when you want. We can see that $$a^2+b^2=(2n+2)^2+(n(n+2))^2=n^4+4n^3+8n^2+8n+4=(n^2+2n+2)^2=c^2.$$ This proves your conjecture.
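One can also confirm the algebraic identity with a computer algebra system, e.g. (a sketch using sympy, my addition):

```python
# Expand the difference of the two sides; it should simplify to 0.
from sympy import symbols, expand

n = symbols('n')
print(expand((2*n + 2)**2 + (n*(n + 2))**2 - (n**2 + 2*n + 2)**2))  # 0
```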
Find a tangent line through $y=1-x^2$ such that the triangle it forms has minimum area.
Let a general tangency point on the curve be $\;(a, 1-a^2)\;$ , so the tangent line to the curve at this point is $$\begin{cases}\text{Slope}\;\;f'(a)=-2a\\{}\\ \text{Tangent line}\;\;y-(1-a^2)=-2a(x-a)\end{cases}\;\iff y=-2ax+a^2+1$$ The above line intersects the axes at $$A(0\,,\,a^2+1)\;,\;\;B\left(\frac{a^2+1}{2a}\,,\,0\right)$$ Thus, the area of the triangle $\;AOB\;$ is $$A(a)=\frac{(a^2+1)^2}{4a}=\frac{a^3}4+\frac a2+\frac1{4a}\implies$$ $$A'(a)=\frac{3a^2}4+\frac12-\frac1{4a^2}=0\iff3a^4+2a^2-1=0\iff$$ $$a^2_{1,2}=\frac{-2\pm\sqrt{4+12}}{6}=\frac{-2\pm4}6\implies a^2=\frac13\implies a=\pm\frac1{\sqrt3}$$ Thus the minimum area is $$A\left(\frac1{\sqrt3}\right)=\frac{\left(\frac13+1\right)^2}{\frac4{\sqrt3}}=\frac{4\sqrt3}{9}.$$
Find $(m+n)th$ term in given Arithmetic Progression
In an arithmetic progression $a,a+d,a+2d,...$, the $k$th term is $a+(k-1)d$. Hence $n(a+(m-1)d)=m(a+(n-1)d)$ so that $(n-m)(a-d)=0$. If we assume $n\neq m$, then we have $a=d$. This means that the arithmetic progression is $a,2a,3a,...$ so that the $(m+n)$th term is $(m+n)a$.
Why does convergence to zero in two norms imply equivalence?
Assume there is no $\alpha$ such that $\Vert x\Vert_1\le \alpha\Vert x\Vert_2$ for all $x$. So for every $n\in\Bbb N$ there is some $x_n$ such that $\Vert x_n\Vert_1>n\Vert x_n\Vert_2$. Let $$z_n=\frac{x_n}{\sqrt n \Vert x_n\Vert_2}$$ so $\Vert z_n\Vert_2=\frac1{\sqrt n}\xrightarrow{n\to\infty}0$ whereas $\Vert z_n\Vert_1\ge \sqrt n\xrightarrow{n\to\infty}+\infty$.
Using Stirling's formula, show that limit is equal to zero
Hint: First note that $h\to \infty$ and $i-h\to \infty$ as $i\to \infty$. Apply the Stirling approximation for the binomial $$ \binom{i}{h} =\frac{i!}{h! (i-h)!}\approx \sqrt{\frac{i}{2 \pi h (i-h)}} \left(\frac{i}{i-h}\right)^i \left(\frac{i-h}{h}\right)^h $$ And write $h=(i+1)\lambda-\epsilon$ with $0\le \epsilon <1$, replace and evaluate the limit.
Show that any field K has a subfield isomorphic to either $\mathbb{Q}$ or $\mathbb{Z}_p$
The prime subfield will be the subfield obtained by taking the additive subgroup generated by $1$ and then throwing in multiplicative inverses. If $K$ has characteristic $0$, then the additive subgroup generated by $1$ will be isomorphic to $\mathbb{Z}$ and so adding inverses gives that the prime subfield is isomorphic to $\mathbb{Q}$. If $K$ has characteristic $p$, then the additive subgroup generated by $1$ is isomorphic to $\mathbb{Z}_p$ as an additive group. The non-zero elements of $\mathbb{Z}_p$ have multiplicative inverses, so the prime subfield is isomorphic to $\mathbb{Z}_p$.
How to compute or give a limsup of this integral?
Here is an evaluation of the integral. Let $z=\frac{2n}{3}x$ \begin{equation} \int\limits_{3/(2n)}^{1} n x^{n-1} \left(x - \frac{3}{2n} \right)^n dx = n\left(\frac{3}{2n} \right)^{2n} (-1)^n \int\limits_{1}^{2n/3} z^{n-1} (1-z)^n dz \end{equation} \begin{align} I(n) &= \int\limits_{1}^{2n/3} z^{n-1} (1-z)^n dz \\ &= \int\limits_{0}^{2n/3} z^{n-1} (1-z)^n dz - \int\limits_{0}^{1} z^{n-1} (1-z)^n dz \\ &= \mathrm{B}_{2n/3}(n,n+1) - \mathrm{B}(n,n+1) \\ &= \frac{1}{n} \left(\frac{2n}{3} \right)^{n} {}_{2}\mathrm{F}_{1}\left(n,-n;n+1;\frac{2n}{3} \right) - \mathrm{B}(n,n+1) \end{align} We have used the incomplete beta function and Gauss's hypergeometric function.
Concept of "eventually almost surely" as an artefact of measure-theoretic axioms?
The mistake here is to say that $\mathcal S=\left\{ w : |\{n : S_n(w) > \sqrt{2 n \log \log n}\}| = \infty \right\}$ is at most countable. As a counterexample, consider the set $\mathcal S'$ of all real numbers in $(0\,,1)$ whose binary expansion is $0.0^{n(0)}1^{n(1)}0^{n(2)}1^{n(3)}0^{n(4)}...$ , in which a string of $n(0)$ $0$s is followed by $n(1)$ $1$s, followed by $n(2)$ $0$s, and so on, where $n(k)=2^{m(k)}$ ($k=0,1,...$) and $\left(m(0),m(1),...\right)$ is a strictly increasing sequence of natural numbers. Then $\mathcal S'\subset\mathcal S$, but $|\mathcal S'|=\mathfrak c$.
$\text{Var}(Y|Z)$ where $Y,Z$ have a normal distribution
It is easy to see, that $(Y,Z)$ follows the multivariate normal distribution: $$N(\begin{pmatrix}0\\ 0\end{pmatrix} , \begin{pmatrix} 10 & 10 \\ 10 & 25 \end{pmatrix})$$ Using the formula for conditioning in a multivariate normal distribution we get the conditional variance to be $Var(Y|Z) = (1-\rho^2)Var(Y)$, where $\rho$ is the correlation coefficient between $Y$ and $Z$. And since $$Var(Y) = \sum_{i=1}^{10} Var(X_i) = 10$$ and $$\rho = \frac{Cov(Y,Z)}{\sqrt{Var(Y)}\sqrt{Var(Z)}} = \frac{10}{\sqrt{10}\sqrt{25}}=\sqrt{\frac{10}{25}},$$ we get, that $$Var(Y|Z) = (1-\frac{10}{25})10 = 6.$$ (see https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Bivariate_case_2 for reference about the conditional distribution in a bivariate normal distribution)
What is $10^5\pmod{35}$?
I wouldn't say "lengthy". You have $10^2=100\equiv30$, so $$10^5=10^210^210\equiv30\times 30\times10\equiv 30\times300\equiv30\times 20\equiv(-5)(-15)\equiv75\equiv 5. $$ Either the above, or just do the long division of $100000$ by $35$.
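For a machine check, Python's built-in three-argument `pow` does modular exponentiation directly:

```python
print(pow(10, 5, 35))  # 5
```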
Equation of a plane through a point and perpendicular to 2 other planes
You made a mistake in the cross product. It should be $(3,-3,3)$.
A statement equivalent to the definition of limits at infinity?
No, they are not equivalent. For a counterexample, take $$f(x) = \frac {1} {\sqrt x}$$ Your second definition assumes something about the rate of decay of $f$ at infinity; the usual definition assumes nothing.
What is an example of a syzygy in invariant theory or pre-abstract algebra
Here is the example of Kleinian singularities. Let $G$ be a finite subgroup of $SU(2,{\bf C})$. $G$ acts on ${\bf C}^2$ and to each $G$ corresponds a Klein singularity $$ S=\frac {{\bf C}^2 } {G} $$ $$ S=\{ (x,y,z) \in {\bf C}^3 \ | \ R(x,y,z)=0 \} $$ One example is the group ${\mathcal C}_{n}$, the cyclic group of order $n$. You can realize it as matrices of the form $$ \begin{pmatrix} \omega^j & 0 \cr 0 & \omega^{-j} \end{pmatrix} $$ where $\omega $ is a primitive $n$-th root of unity. The invariant functions are the functions which satisfy $f(u,v)=f(au+bv, cu+dv)$ for every matrix $\begin{pmatrix} a & b \cr c & d \end{pmatrix}$ in $G$. The following three functions are invariant: $u^n$, $v^n$, $uv$. Set $x=u^n$, $y=v^n$ and $z=uv$. Then they satisfy the following syzygy: $$z^n=xy$$
If $f(x)$ is a polynomial in $\mathbb{Z}$ and $f(a)\equiv k\pmod{n}$, prove that, for all integer $m$, $f(a+mn)\equiv k\pmod{n}$
Let $f(x)=c_0+c_1x+c_2x^2+\cdots+c_dx^d$. Note that from the binomial expansion $(a+mn)^k\equiv a^k\bmod n.$ Therefore $f(a+mn)=c_0+c_1(a+mn)+c_2(a+mn)^2+\cdots+c_d(a+mn)^d$ $\equiv c_0+c_1a+c_2a^2+\cdots+c_da^d=f(a)\bmod n$.
Number of Real roots of cubic
The derivative of $\mathbb{P}(x)$ is $3x^2 + 4x + 2$ (it doesn't depend on $D$!). The discriminant of this quadratic is $16-24 = -8 < 0$, so the derivative has no real roots. This means that $\mathbb{P}(x)$ has no critical points (i.e., no points where the derivative is $0$). Thus, $\mathbb{P}(x)$ is either monotonically increasing or monotonically decreasing (in fact increasing, since the leading coefficient is positive). Either way, it will cross the real axis exactly once.
Proof of $A = \emptyset \Leftrightarrow f(A) = \emptyset$
No, $f(A)=\emptyset$ always implies $A=\emptyset$. The set $f(A)$ is defined as $f(A)=\{f(a) \mid a \in A\}$. Suppose that in fact $A \neq \emptyset$, so there exists at least one $a \in A$. Then $f(a) \in f(A)$, so $f(A) \neq \emptyset$. Contradiction.
Solution to $\frac{d^2 y}{dt^2}+y=\sec\left(t\right)$
The function $y(t) =t \sin(t)+\cos(t) \log(\cos(t))$ is a solution (for example). (See Taylor Martin's answer on where this is a valid solution).
Counting number of multiplications and additions in exponential function exp(.)
Yes of course: if you are using a particular algorithm to compute $\exp(x)$ for a given $x$ to within a given tolerance $\epsilon$, you can go through it step by step and explicitly count each multiplication and addition you perform. If you want a closed-form formula in terms of $x$ and $\epsilon$, that might be more difficult, depending on the algorithm.
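For a concrete instance (one common algorithm, not necessarily the one the OP has in mind), here is a sketch that evaluates the degree-$n$ Taylor polynomial of $\exp$ by Horner's rule and tallies the operations; the function name and the convention of counting a division as a multiplication are mine:

```python
# Horner form: p = 1 + x(1 + x/2 (1 + x/3 (...))) for the Taylor series of exp.
def exp_taylor_horner(x, n):
    mults = adds = 0
    p = 1.0
    for k in range(n, 0, -1):
        p = 1.0 + (x / k) * p   # one division, one multiplication, one addition
        mults += 2              # counting the division as a multiplication
        adds += 1
    return p, mults, adds

val, m, a = exp_taylor_horner(1.0, 20)
print(val, m, a)   # ~e, 40 multiplications, 20 additions
```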
The mathematics behind AlphaGo AI and Google Deepmind
If you are interested in starting your journey in Artificial Intelligence, I suggest to get your hands on Artificial Intelligence: A Modern Approach, by Stuart Russell and Peter Norvig, which comes recommended by the Machine Intelligence Research Institute of San Francisco. It is a really in-depth introduction to the topic, with very few prerequisites. Plus it has tons of well-thought exercises.
Coefficient Matrix of $T:\mathbb{R}^{3}\rightarrow\mathbb{R}$
Write $\vec x$ in components $\vec x = (x_1, x_2, x_3)^t$, a column vector. Then $T(\vec x) = x_1u_1 + x_2u_2 + x_3u_3$. (Note that there should be no vector arrows over the component values $u_i$.) As $x_1u_1 + x_2u_2 + x_3u_3$ is a real number, we can also think of it as a $1 \times 1$ matrix. Since $\vec x$ is a $3 \times 1$ matrix, the matrix representing $T$ must be a $1 \times 3$ matrix. Can you write it down now?
Necessary and sufficient conditions for a quadratic polynomial with complex coefficients to have both roots with negative real parts
Let us separate it into two cases : Case 1 : When $\Re(a^2-4b)+|a^2-4b|=0$, i.e. $\Re(a^2-4b)\le 0$ and $\Im(a^2-4b)=0$, the roots of $z^2+az+b=0$ are given by $$z_1,z_2=\frac{-(\Re(a)+i\Im(a))\pm i\sqrt{4b-a^2}}{2}$$ from which $$\Re(z_1)\lt 0\quad\text{and}\quad \Re(z_2)\lt 0\iff -\frac{\Re(a)}2\lt 0\iff \Re(a)\gt 0$$follows. Case 2 : When $\Re(a^2-4b)+|a^2-4b|\not=0$, let us use the following lemma (the proof for the lemma is written at the end of the answer) : Lemma : Let $\Delta :=t^2-4su$. If $s\not=0$ and $\Re(\Delta)+|\Delta|\not=0$, then the roots of $$sx^2+tx+u=0$$ are given by $$x=-\frac{t}{2s}\pm\frac{\Delta+|\Delta|}{2\sqrt 2\ s\sqrt{\Re(\Delta)+|\Delta|}},$$ i.e. $$x=-\frac{t}{2s}\pm \left(\frac{\sqrt{\Re(\Delta)+|\Delta|}}{2\sqrt 2\ s}+i\frac{\Im(\Delta)}{2\sqrt 2\ s\sqrt{\Re(\Delta)+|\Delta|}}\right)$$ From the lemma, the roots of $z^2+az+b=0$ are given by $$\small z_1,z_2=-\frac{\Re(a)+i\Im(a)}{2}\pm\left(\frac{\sqrt{\Re(a^2-4b)+|a^2-4b|}}{2\sqrt 2}+i\frac{\Im(a^2-4b)}{2\sqrt 2\sqrt{\Re(a^2-4b)+|a^2-4b|}}\right)$$ from which $$\begin{align}&\Re(z_1)\lt 0\qquad\text{and}\qquad \Re(z_2)\lt 0 \\\\&\small\iff -\frac{\Re(a)}{2}+\frac{\sqrt{\Re(a^2-4b)+|a^2-4b|}}{2\sqrt 2}\lt 0\quad\text{and}\quad -\frac{\Re(a)}{2}-\frac{\sqrt{\Re(a^2-4b)+|a^2-4b|}}{2\sqrt 2}\lt 0 \\\\&\iff \frac{\Re(a)}{2}\gt \frac{\sqrt{\Re(a^2-4b)+|a^2-4b|}}{2\sqrt 2} \\\\&\iff \sqrt 2\ \Re(a)\gt \sqrt{\Re(a^2-4b)+|a^2-4b|} \\\\&\iff \Re(a)\gt 0\qquad \text{and}\qquad 2(\Re(a))^2\gt \Re(a^2-4b)+|a^2-4b|\end{align}$$ follows. From the two cases, a necessary and sufficient condition is $$\color{red}{\Re(a)\gt 0\qquad \text{and}\qquad 2(\Re(a))^2\gt \Re(a^2-4b)+|a^2-4b|}$$ Finally, let us prove the lemma. Proof for the lemma : We get $$\begin{align}&s\left(-\frac{t}{2s}\pm\frac{\Delta+|\Delta|}{2\sqrt 2\ s\sqrt{\Re(\Delta)+|\Delta|}}\right)^2+t\left(-\frac{t}{2s}\pm\frac{\Delta+|\Delta|}{2\sqrt 2\ s\sqrt{\Re(\Delta)+|\Delta|}}\right)+u \\\\&=s\left(\frac{t^2}{4s^2}\mp\frac{t(\Delta+|\Delta|)}{2\sqrt 2\ s^2\sqrt{\Re(\Delta)+|\Delta|}}+\frac{(\Delta+|\Delta|)^2}{8s^2(\Re(\Delta)+|\Delta|)}\right) \\\\&\qquad\qquad\qquad\qquad -\frac{t^2}{2s}\pm\frac{t(\Delta+|\Delta|)}{2\sqrt 2\ s\sqrt{\Re(\Delta)+|\Delta|}}+u \\\\&=\frac{t^2}{4s}\mp\frac{t(\Delta+|\Delta|)}{2\sqrt 2\ s\sqrt{\Re(\Delta)+|\Delta|}}+\frac{(\Delta+|\Delta|)^2}{8s(\Re(\Delta)+|\Delta|)}\\\\&\qquad\qquad\qquad\qquad -\frac{t^2}{2s}\pm\frac{t(\Delta+|\Delta|)}{2\sqrt 2\ s\sqrt{\Re(\Delta)+|\Delta|}}+u \\\\&=\frac{-\Delta}{4s}+\frac{(\Delta+|\Delta|)^2}{8s(\Re(\Delta)+|\Delta|)} \\\\&=\frac{-(\Delta+\overline{\Delta})\Delta+\Delta^2+\Delta\overline{\Delta}}{8s(\Re(\Delta)+|\Delta|)} \\\\&=0\qquad\quad\square\end{align}$$
Is there a function that's not in Big O and not in Big Omega?
Here is a continuous and strictly increasing example: Let $g(y) = \sin^2(y)+y$ and $$ f(x)= e^{\textstyle e^{g(\log \log x)}} $$ for $x>1$. Then $g$ and therefore $f$ is strictly increasing, and $$ x = e^{\textstyle e^{\log\log x}} \le f(x) \le e^{\textstyle e^{1+\log\log x}} = e^{e \log x} = x^e $$ where for each of the inequalities there are arbitrarily large $x$ that makes it $=$, and therefore $f$ is neither $O(x^2)$ nor $\Omega(x^2)$.
Differentiate $y^{1-n}=z$?
Based on your recent question, I assume the reason you are asking this is because you are solving a Bernoulli differential equation. When solving these, we assume that $y$ and $z$ are functions of $x$. Therefore, you should not be considering $y$ as a constant, but you should be considering $y:=y(x)$, $z:=z(y)$. $$z:=z(y), y:=y(x) \implies z:=z(y(x))$$ Therefore, it becomes evident that we should use the chain rule! In Leibniz Notation, this is: $$\frac{dz}{dx}=\frac{dz}{dy}\cdot \frac{dy}{dx} \tag{1}$$ We can find $\frac{dz}{dy}$ by using the Power Rule of Differentiation: $$\frac{d}{dy}(y^\alpha)=\alpha y^{\alpha-1}$$ Hence, in our case where $\alpha=1-n$, this would be: $$\frac{dz}{dy}=\frac{d}{dy}\left(y^{1-n}\right)=(1-n)y^{1-n-1}=(1-n)y^{-n}$$ Substituting into $(1)$, we get the result required: $$\bbox[5px,border:2px solid #C0A000]{\frac{dz}{dx}=(1-n)y^{-n}\cdot \frac{dy}{dx}}$$
Closed/compact form solution for $\int_{0}^\infty\int_{x_1/z}^\infty \frac{e^{-y_1(z+1)}}{1+z-\frac{x_1}{y_1}}\,dy_1\,dx_1$
Hint: for $x_1$ fixed, set $t=\frac{y_1}{x_1}$ in the inner integral. Then, \begin{align*}\int_{x_1=0}^{\infty}\int_{y_1=\frac{x_1}{z}}^{\infty} \frac{e^{-y_1(z+1)}}{1+z-\frac{x_1}{y_1}}\,dy_1\,dx_1&=\int_{x_1=0}^{\infty}\int_{t=\frac{1}{z}}^{\infty} \frac{x_1e^{-tx_1(z+1)}}{1+z-\frac{1}{t}}\,dt\,dx_1\\ &=\int_{t=\frac{1}{z}}^{\infty}\frac{1}{1+z-\frac{1}{t}}\int_{x_1=0}^{\infty}x_1e^{-tx_1(z+1)}\,dx_1\,dt\\ &=\int_{t=\frac{1}{z}}^{\infty}\frac{1}{1+z-\frac{1}{t}}\frac{1}{t^2(z+1)^2}\,dt. \end{align*}
Complex number (square) root notation.
In general the equation $$z^2=w$$ has two (complex) solutions $z_1$ and $z_2=-z_1$. In the case where $w$ is real and positive, the solutions are real, so exactly one solution is positive and one solution is negative. It is then possible to assign $\sqrt{w}$ to the positive solution. Returning to the general complex case, $\Bbb C$ cannot be equipped with an order compatible with the field operations, as $\Bbb R$ can. The two solutions $z_1$ and $z_2$ are now on an equal footing. It is then not natural, even impossible, to canonically assign $\sqrt{w}$ to either of the two solutions.
Tower number is a regular uncountable cardinal
If $(\alpha_n)$ is a cofinal sequence, the collection of the $A_{\alpha_n}$ is also a tower. If $\{A_n: n \in \omega\}$ is a countable tower, we can take distinct $b_1\in A_1$, $b_2\in A_1\cap A_2$, etc. (each finite intersection is infinite, so distinct choices are possible). The set $B = \{b_n\}$ is then almost contained in every $A_n$, so the tower is not maximal.
Using the dense subset property and Darboux's definition
The issue is that $m_{i}=\ln 1=0$ and $M_{i}=\ln\sqrt{3}$, so $L(f,P)=0$ and $U(f,P)=\ln\sqrt{3}$. As @Berci has noted, there is also an issue with the $\sup$ and $\inf$: they are numbers, not sets.
Integrate $\frac{x^2-1}{x^2+1}\frac{1}{\sqrt{1+x^4}}dx$
$$\int \frac{x^2-1}{x^2+1}\frac{1}{\sqrt{1+x^4}}dx$$ $$=\int\frac{1-\dfrac1{x^2}}{\left(x+\dfrac1x\right)\sqrt{\left(x+\dfrac1x\right)^2-2}}dx$$ Since $d\left(x+\dfrac1x\right)=\left(1-\dfrac1{x^2}\right)dx$, setting $u=x+\dfrac1x$ turns this into $\displaystyle\int\frac{du}{u\sqrt{u^2-2}}$. Using trigonometric substitution, set $u=\sqrt2\sec\phi$.
How to get the follow integration formula by substitution
Just perform the substitution: $$y = \frac wu \Rightarrow \frac{\mathrm du}{\mathrm dy} = -\frac w{y^2}$$ and $u = 1 \Rightarrow y = w, u \to 0 \Rightarrow y \to \infty$. This gives $$f_W(w) = \frac1\mu \int_{\infty}^w \frac{w y^2}{w^2} f_m(y) \cdot -\frac w{y^2} \;\mathrm dy = -\frac1\mu \int_{\infty}^w \frac{y^2}w \cdot \frac w{y^2} f_m(y)\;\mathrm dy = \frac1\mu \int_w^\infty f_m(y)\;\mathrm dy$$ as claimed.
What's the difference between $\frac{\delta}{dt}$ and $\frac{d}{dt}$?
There is a very good explanation of the $\delta$ notation in Chapter 6 of the book "Classical Dynamics" by Thornton and Marion. Despite the fact that it is a book for physicists, you will see the explanation of such notation by means of the calculus of variations. Its difference from the other symbols for derivatives is also covered there.
How to prove that this integral approaches $\pi$ as $\epsilon \rightarrow 0$
Take $\epsilon_n>0$ such that $\epsilon_n \to 0$. Define $g_n(t)=e^{-\epsilon_n\sin{t}}e^{i\epsilon_n\cos{t}}$. Then $g_n(t) \to 1$ pointwise, and $|g_n(t)| \leq 1 \in L^1[0,\pi]$ since $\sin{t} \geq 0$ on $[0,\pi]$. So by dominated convergence you have the limit: $\int_0^\pi g_n(t)\,dt \to \int_0^\pi 1\,dt = \pi$.
Proof of $f = g \in L^1_{loc}$ if $f$ and $g$ act equally on $C_c^\infty$
Like Theo said, this is a standard question whose standard answer (as far as I know) uses smoothing by convolution: take an arbitrary compact $K \subset \mathbb{R}^n$, regularize its characteristic function $\chi_K$ by means of a mollifier sequence and use the hypothesis to show that $\int_K \big(f(x)-g(x)\big)\ dx=0$. I'd like to provide another proof which doesn't rely on convolution (at least not explicitly). The idea behind it is precisely the same, though. We need two lemmas. Lemma 1 If a Borel measurable function $f \colon \mathbb{R}^n \to \mathbb{R}$ is such that, for every (compact) rectangle $R$, $$\int_R f(x)\,dx=0,$$ then $f=0$ almost everywhere. Proof Start by recording that the hypotheses implicitly tell us $f \in L^1_{\rm{loc}}(\mathbb{R}^n)$. It is a known fact (see, e.g., Rudin's Real and complex analysis, §2.19) that every open subset of $\mathbb{R}^n$ can be written as the union of a countable collection of rectangles intersecting only at the boundary. So the $\sigma$-algebra generated by the collection of the rectangles is the whole Borel $\sigma$-algebra. Fix $M >0$ and consider the rectangle $R_M=[-M, M] \times \ldots \times [-M, M]$. The family $$\mathcal{A}_M=\left\{ B \subset R_M \mid \int_B f(x)\,dx\ \text{makes sense and vanishes}\right\}$$ is a $\lambda$-system of subsets of $R_M$, because $R_M \in \mathcal{A}_M$; for every $A, B \in \mathcal{A}_M\ \mathrm{s.t.}\ A \subset B$, we have $$\int_{B-A}f(x)\,dx=\int_B f(x)\, dx-\int_A f(x)\, dx=0;$$ if $A_1 \subset A_2 \subset \ldots \in \mathcal{A}_M$ then, once we put $A=\cup_k A_k$, we have $$\int_A f(x)\, dx= \lim_{k \to \infty} \int_{A_k} f(x)\, dx=0$$ by dominated convergence: in fact $\lvert f\chi_{A_k} \rvert \le \lvert f \chi_{R_M}\rvert \in L^1(\mathbb{R}^n)$ (recall that $f \in L^1_{\rm{loc}}$). $\mathcal{A}_M$ contains all rectangles of $\mathbb{R}^n$ contained in $R_M$ and those obviously form a $\pi$-class (a class of subsets closed under finite intersections): now by the $\pi$-$\lambda$ theorem we can conclude that $\mathcal{A}_M$ contains all Borelian subsets of $R_M$ and thus $f$ vanishes almost everywhere on it. Since $M$ was arbitrary, we infer that $f$ vanishes almost everywhere on $\mathbb{R}^n$. $\square$ The other lemma we need is the existence of bump functions: Lemma 2 Let $R=[a_1, b_1] \times \ldots \times [a_n, b_n], \varepsilon > 0, R_{\varepsilon}=[a_1-\varepsilon, b_1+\varepsilon] \times \ldots \times [a_n - \varepsilon, b_n + \varepsilon]$. Then there exists a function $b\in C^\infty_c(\mathbb{R}^n)$ s.t. $0\le b \le 1,\ b \equiv 1$ on $R$ and $\mathrm{supp}(b)\subset R_\varepsilon$. Proof For each $j$ let $$\varphi^{-}_j(t)=\begin{cases} e^{\frac{1}{(a_j-\varepsilon - t) (t-a_j)}} & t \in (a_j-\varepsilon, a_j) \\ 0 & \text{otherwise} \end{cases}.$$ Then put $\psi^{-}_j(s)=C^{-}_j\int_{a_j-\varepsilon}^s\varphi^{-}_j(t)\,dt$ where $C^{-}_j$ is a constant chosen so that $\psi^{-}_j\le 1$. Now do the same for $b_j$: let $$\varphi^{+}_j(t)=\begin{cases}e^{\frac{1}{(b_j- t) (t-b_j-\varepsilon)}} & t \in (b_j, b_j+\varepsilon) \\ 0 & \text{otherwise} \end{cases}$$ and $\psi^{+}_j(s)=C^{+}_j\int_{b_j+\varepsilon}^s \varphi^{+}_j(t)\,dt.$ The function $b_j = \psi_j^-\psi_j^+$ is $C^\infty$ and satisfies $0 \le b_j \le 1, b_j \equiv 1$ on $[a_j, b_j]$ and $\mathrm{supp}(b_j) \subset [a_j-\varepsilon, b_j + \varepsilon]$. So $b(x_1 \ldots x_n)=\prod_j b_j(x_j)$ is what we were looking for. $\square$ With those lemmas it is pretty easy to prove the original claim.
In fact, let $f, g$ be as in the hypothesis and put $F=f-g$. Take a rectangle $R$ and, for every $n\in \mathbb{N}$, a bump function $b_n$ separating $R$ from $R_{n^{-1}}$. Then $b_n \to \chi_{R}$ pointwise, so that $Fb_n \to F\chi_R$ pointwise; also $\lvert Fb_n \rvert \le \lvert F \chi_{R_1}\rvert \in L^1(\mathbb{R}^n)$. By the dominated convergence theorem, we get $$\int_R F(x)\,dx=0.$$ By Lemma 1, $F\equiv 0$ almost everywhere.
Neal Koblitz used this limit in his calculation of $\zeta(2k)$
Expanding with Taylor series $\sin(X) = X(1 + O(X^2))$, we have $$ \sin^2(x\pi/(2k+1)) = (x\pi/(2k+1))^2 (1+O(1/k^2)) ,$$ $$ \sin^2(r\pi/(2k+1)) = (r\pi/(2k+1))^2 (1+O(r^2/k^2)) ,$$ so \begin{align} 1-\frac{\sin^2(x\pi/(2k+1))}{\sin^2(r\pi/(2k+1))} &= 1-\frac{(x\pi/(2k+1))^2}{(r\pi/(2k+1))^2} (1 + O(r^2/k^2)) \\&= 1-\frac{(x\pi/(2k+1))^2}{(r\pi/(2k+1))^2} + O(1/k^2) \\&= (1-x^2/r^2)(1 + O(1/k^2)) \end{align} where the implied constants in the "big-Oh"s are allowed to depend upon $x$. But $$ \log \left(\prod_{r=1}^k (1 + O(1/k^2))\right) = \sum_{r=1}^k O(1/k^2) = O(1/k) \to 0 $$ as $k \to \infty$.
Intuitively, why is compounding percentages not expressed as adding percentages?
Say something costs $\$100$ and the price goes up by $50\%$. $50\%$ of $\$100$ is $\$50$, so the new price is $\$150$. Then it goes up by $50\%$ again. $50\%$ of $\$150$ is $\$75$, so the new price is $\$225$. The point is that the second time, you're taking $50\%$ of a larger quantity.
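A one-liner makes the multiplicative structure explicit (a minimal Python sketch): two successive $50\%$ increases multiply the price by $1.5^2 = 2.25$, not by $1 + 0.5 + 0.5 = 2$.

```python
price = 100.0
for _ in range(2):   # two successive 50% increases
    price *= 1.5     # multiply, don't add: each step scales the *current* price
print(price)         # 225.0, a 125% total increase, not 100%
```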
Who were the amateur mathematicians who won the Fields Medal?
One who is not a professional mathematician is Edward Witten, who is a physicist.
Set of $2$-cycles generates $S_n$
The inclusion $\subset$ is correctly justified. The other way ($\supset$) can be proved by giving an explicit construction $$S_n \ni \pi = (1\ i_1)(1\ i_2)\ldots(1\ i_k),$$ where $k$ is finite, $\pi$ is a given permutation and the $i_j$ can be chosen. For this, look at products $(1\ i_k)\ldots(1\ i_1)(1\ i_k)$. What are they? For example, $(1\ 2)(1\ 3)(1\ 2) = \;?$ Observe the following: $$(1\ i_k)\ldots(1\ i_1)(1\ i_k) = (i_1\ i_2\ \ldots\ i_k)$$ So we can generate any cycle with this construction. Since every permutation is a product of disjoint cycles, we can decompose a permutation into its cycles and apply the construction to each cycle. The product of these is a representation of $\pi$ as a product of the swaps $(1\ i)$ for some $i$. For example $\pi = (1\ 3\ 2)(4\ 5) = (1\ 2)(1\ 3)\cdot (1\ 5)(1\ 4)(1\ 5) = (1\ 2)(1\ 3)(1\ 5)(1\ 4)(1\ 5)$
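Here is a short Python sketch (the helper names are my own) verifying the identity $(1\ i_k)\ldots(1\ i_1)(1\ i_k)=(i_1\ i_2\ \ldots\ i_k)$ on the cycle $(2\ 3\ 4)$ inside $S_5$, composing right to left:

```python
def compose(p, q):
    # right-to-left composition: (p*q)(x) = p(q(x))
    return {x: p[q[x]] for x in q}

def t(a, b, n=5):
    # the transposition (a b) on {1, ..., n}
    perm = {x: x for x in range(1, n + 1)}
    perm[a], perm[b] = b, a
    return perm

i = [2, 3, 4]                       # target cycle (i_1 i_2 i_3) = (2 3 4)
word = [t(1, j) for j in reversed(i)] + [t(1, i[-1])]  # (1 i_3)(1 i_2)(1 i_1)(1 i_3)

result = {x: x for x in range(1, 6)}
for f in reversed(word):            # rightmost factor acts first
    result = compose(f, result)
print(result)                       # {1: 1, 2: 3, 3: 4, 4: 2, 5: 5} = the cycle (2 3 4)
```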
Is the entrywise nonnegative part of a real positive semidefinite matrix still positive semidefinite?
HINT: The answer is No. I will tell you why I thought the answer is no, then I will tell you how to find a counterexample. There is a related question about positive semidefinite matrices: whether taking the absolute value entrywise keeps us in the positive-semidefinite domain. The answer is: only if the dimension is not larger than $3$. Now, if this one were true, the answer to your question would also be yes. So we suspect the answer to be no. How to look for counterexamples: I found one of size $5\times 5$. The trick is to produce a large enough supply of positive semidefinite matrices. This you do by first producing random symmetric matrices ($b = a + a^{t}$), then taking the matrix exponential. Eventually you will hit a counterexample. An explicit counterexample: \begin{eqnarray} \left( \begin{array}{ccccc} 189.79 & 5.37843 & -122.669 & -214.584 & 122.596 \\ 5.37843 & 17.4416 & 3.21858 & -20.9122 & 13.1482 \\ -122.669 & 3.21858 & 83.255 & 133.105 & -75.7694 \\ -214.584 & -20.9122 & 133.105 & 255.536 & -146.986 \\ 122.596 & 13.1482 & -75.7694 & -146.986 & 84.6935 \\ \end{array} \right) \end{eqnarray}
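A quick NumPy check of the claimed counterexample (this only tests the matrix as printed, to the precision shown): the matrix itself should have no negative eigenvalues, while its entrywise nonnegative part $\max(a_{ij},0)$ should have one.

```python
import numpy as np

A = np.array([
    [ 189.79,     5.37843, -122.669,  -214.584,   122.596 ],
    [   5.37843, 17.4416,     3.21858,  -20.9122,  13.1482],
    [-122.669,    3.21858,   83.255,   133.105,   -75.7694],
    [-214.584,  -20.9122,   133.105,   255.536,  -146.986 ],
    [ 122.596,   13.1482,   -75.7694, -146.986,    84.6935],
])

print(np.linalg.eigvalsh(A).min())                 # nonnegative (up to rounding): A is PSD
print(np.linalg.eigvalsh(np.maximum(A, 0)).min())  # strictly negative: max(a_ij, 0) is not
```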
Proof that an open set is pathwise connected if and only if it is connected (complex analysis)
The first part shows one direction: if $\Omega$ is path-connected then $\Omega$ is connected. This also holds for non-open $\Omega$, and in fact in any topological space. It needs that $[0,1]$ is connected (a fact you've essentially been asked to reprove in this exercise), and thus if $p:[0,1] \to \Omega$ is the path from $w_1$ to $w_2$, then $\Omega_1 \cap p[[0,1]], \Omega_2 \cap p[[0,1]]$ is a non-trivial disconnection of the connected space $p[[0,1]]$ (by continuity), and this contradiction shows that $\Omega$ cannot be disconnected and hence is connected. The second part uses that if $z \in \Omega_1$, there is a whole ball $B(z,r) \subseteq \Omega$, and then all points in $B(z,r)$ can also be reached by a path (inside $\Omega$) from $w$, just like $z$, by taking as the last stage the line segment from $z$ to such a point (which lies inside the (convex!) ball); so in fact $B(z,r) \subseteq \Omega_1$, and the latter set is open. The argument for $\Omega_2$ being open uses a similar idea with straight line segments, to show that if $z \in \Omega_2$ (so $z$ cannot be reached) then any $B(z,r) \subseteq \Omega$ also consists of unreachable points, so that $B(z,r) \subseteq \Omega_2$. Connectedness then implies one of the $\Omega_i$ is empty, and $\Omega_1$ is not (the constant path at $w$ shows $w \in \Omega_1$), etc.
Relation between continuity and weak star continuity
Just from general considerations, condition 3 is the strongest: every open set in the larger (i.e. the strong) topology has an inverse image that is in the smaller topology (i.e. the weak$^\ast$ topology). We just use that we have a map $T$ between a space that has a smaller and a larger topology, no more; assume $T$ is 3-continuous: It implies 1: if $O$ is open in the smaller topology, it is open in the larger one, so by 3, $T^{-1}[O]$ is open in the smaller one. So it's 1-continuous. It implies 2: if $O$ is open in the smaller topology, it is open in the larger one, so 3 implies $T^{-1}[O]$ is open in the smaller one, so also in the larger one. So it is 2-continuous. It implies 4: if $O$ is open in the larger topology, 3 implies that $T^{-1}[O]$ is open in the smaller topology, so also in the larger one, so $T$ is 4-continuous. Similarly, if $T$ is 4-continuous, it is 2-continuous. If $T$ is 1-continuous, it is 2-continuous. I don't see any others that just follow from inclusion of the two topologies. I'm not sure about counterexamples to other implications; my functional analysis is rusty...
State diagram of DFA
"Final state" is a confusing term. It does not mean that the state is actually the last one, or that the automaton stops when it reaches the final state. It means that if the automaton reaches that state at the end of the input, then the automaton will accept the input. The automaton might reach a final state at the end of the input, or it might not. For this reason some people prefer to call final states "accepting states". The situation is further confused because the answer you are given for the left-hand automaton is wrong. The automaton pictured there accepts strings with exactly two as, not strings with exactly three as. I will suppose that the question was misprinted, and should say $$\{w \mid \text{$w$ has exactly $\bf{two}$ $\mathtt{a}$'s}\}.$$ The left-hand automaton must accept strings with exactly two as, and it must also reject strings with fewer than 2 as or with more than 2 as. After reading any string with zero as, the automaon will be in the leftmost state. After reading any string with exactly one a, it will be in the second state. After reading any string with exactly two as it will be in the third state, which is an accepting state, so it will accept any such string. But what does it do if the input string contains three or more as? It must do something; the definition of an automaton says that there must be a transition function which says what the new state is for any previous state and any input symbol. So there must be transitions out of the third state for both a and b. A third a takes the automaton into the fourth state, and once it is in that state it stays there until the input is completely read. Then it rejects the input, because the input had more than two as. The second automaton does not accept aaaa, because each a takes it around the loop leading from the initial state back to the initial state. After reading aaaa it finishes in the same state it started in; because this is not an accepting state, the automaton does not accept the state aaaa.
Expectation of Inverse Normal CDF
This diverges to $-\infty.$ Write $$\mathbb{E}[\mu/\Phi(\mu)] = \mathbb{E}[1_{\{\mu\geq 0\}}\mu/\Phi(\mu)] + \mathbb{E}[1_{\{\mu< 0\}}\mu/\Phi(\mu)].$$ If $\mu\geq 0,$ we have $\Phi(\mu)\geq 1/2$ and hence $\mu/\Phi(\mu)\leq 2\mu.$ Hence, the first term can be bounded: $$\mathbb{E}[1_{\{\mu\geq 0\}}\mu/\Phi(\mu)] \leq 2\mathbb{E}[1_{\{\mu\geq 0\}}\mu]<\infty.$$ For the second term we use the following bound for $t<0$: $$\Phi(t) < \frac{1}{-t\sqrt{2\pi}}e^{-t^2/2}.$$ See for example here: https://www.johndcook.com/blog/norm-dist-bounds/. Hence, $$\mathbb{E}[1_{\{\mu< 0\}}\mu/\Phi(\mu)]\leq \mathbb{E}\left[1_{\{\mu< 0\}}\frac{\mu}{\frac{1}{-\mu\sqrt{2\pi}}e^{-\mu^2/2}}\right] = -\sqrt{2\pi}\cdot\mathbb{E}\left[1_{\{\mu< 0\}}\mu^2 e^{\mu^2/2}\right] = -\frac{\sqrt{2\pi}}{2}\,\mathbb{E}\left[\mu^2 e^{\mu^2/2}\right].$$ The final factor $\frac12$ comes from symmetry, allowing us to drop the indicator. Now note that the right-hand side diverges to $-\infty$: since $\mu^2 e^{\mu^2/2}\geq e^{\mu^2/2}$ whenever $|\mu|\geq 1$, and the contribution of $\{|\mu|<1\}$ to $\mathbb{E}[e^{\mu^2/2}]$ is at most $e^{1/2}$, it suffices to show that $\mathbb{E}[e^{\mu^2/2}]=\infty$. But $\mathbb{E}[e^{\mu^2/2}]$ is the moment generating function of $\mu^2$ evaluated at $1/2$, and $\mu^2$ follows a $\chi^2_1$ distribution. As you can see here https://stats.stackexchange.com/questions/7278/finding-the-moment-generating-function-of-chi-squared-distribution, the moment generating function of a $\chi^2_1$ random variable diverges to infinity at $1/2$. Hence $$\mathbb{E}[1_{\{\mu< 0\}}\mu/\Phi(\mu)] = -\infty,$$ and in particular $$\mathbb{E}[\mu/\Phi(\mu)] = -\infty.$$
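A numerical illustration (Python/SciPy; this assumes $\mu\sim N(0,1)$, as in the question): the truncated integral $\int_{-M}^{10}\frac{t}{\Phi(t)}\varphi(t)\,dt$ keeps decreasing as the cutoff $M$ grows, consistent with divergence to $-\infty$ (in fact the integrand behaves like $-t^2$ as $t\to-\infty$, by the Mills-ratio asymptotics used above).

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# E[mu/Phi(mu)] truncated at -M; the integrand ~ -t^2 as t -> -infinity
integrand = lambda t: t / norm.cdf(t) * norm.pdf(t)

for M in [2, 4, 6, 8, 10]:
    val, _ = quad(integrand, -M, 10)
    print(M, val)  # decreases without bound, roughly like -M^3/3
```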
For $H$ the orthocenter of $\triangle ABC$, prove $|AH|h_a+|BH|h_b+|CH|h_c=\frac12\left(a^2+b^2+c^2\right)$
If $A',B',C'$ are the feet of the altitudes, then from $ABA'\sim AHC'$ we have $AH\cdot h_a= AB\cdot AC'$. Similarly, from $BAB'\sim BHC'$, $BH\cdot h_b= AB\cdot BC'$ follows. By adding we have $AH\cdot h_a+BH\cdot h_b=AB(AC'+BC')= AB^2=c^2$. By symmetry, $BH\cdot h_b+CH\cdot h_c= a^2$ and $AH\cdot h_a+CH\cdot h_c= b^2$. Now add these three equations and divide by $2$.
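A numerical sanity check in Python (the triangle below is an arbitrary acute example; the orthocenter is found as the intersection of two altitudes):

```python
import numpy as np

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])

# Orthocenter H: (H - A) perpendicular to BC and (H - B) perpendicular to AC
# give a 2x2 linear system for H.
M = np.array([C - B, C - A])
rhs = np.array([np.dot(A, C - B), np.dot(B, C - A)])
H = np.linalg.solve(M, rhs)

a, b, c = np.linalg.norm(C - B), np.linalg.norm(C - A), np.linalg.norm(B - A)
ab, ac = B - A, C - A
area = 0.5 * abs(ab[0] * ac[1] - ab[1] * ac[0])
h_a, h_b, h_c = 2 * area / a, 2 * area / b, 2 * area / c  # altitude lengths

lhs = (np.linalg.norm(H - A) * h_a + np.linalg.norm(H - B) * h_b
       + np.linalg.norm(H - C) * h_c)
print(lhs, 0.5 * (a**2 + b**2 + c**2))  # both print 22.0 for this triangle
```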
Definition of higher complex integral over polydisc
Let's stay in dimension $n=2$ as you are and let me rename the variables to $z_1$ and $z_2$. The so-called "distinguished boundary" of the polydisc is the $2$-real-dimensional submanifold $S^1 \times S^1$ (a torus), the product of the boundary circles of the two discs. When integrating over this, you have to use something like differential forms, or you just think of it as an iterated integral as it is a product manifold. The differential form you integrate over it is $dz_1 \wedge dz_2$, where $z_j = x_j + i \, y_j$, so $dz_j = dx_j +i \, dy_j$. These are one-forms, so $dz_1 \wedge dz_2$ is a $2$-form and so integrates over a $2$-real-dimensional surface. I would suggest a book on differential forms, such as Spivak's Calculus on Manifolds or similar. Now the tricky bit for complex analysis is the orientation in higher dimension. There is a standard orientation of $\mathbb C$: if $z= x+i \,y$, then the orientation is the ordering $(x,y)$. But in $\mathbb C^2$, we have two possibilities, $(x_1,y_1,x_2,y_2)$ or $(x_1,x_2,y_1,y_2)$. Neither is totally standard; the trick is to just pick one and stay consistent in your calculations. The distinguished boundary is then simply oriented in such a way so that the Cauchy formula comes with a plus sign, not a minus sign. However, if the worst thing you will integrate over is the distinguished boundary of a polydisc, such as in Cauchy's theorem, it is entirely fine to just use the "iterated integral" to think of what it is. I mean underneath, whenever you integrate over a submanifold, you in the end write the overall integral as a bunch of iterated integrals using something like a partition of unity. For the torus we are lucky that we can just write it as a single rather simple iterated integral. If you don't like the path integral there, just think of the arguments $\theta_1$ and $\theta_2$, and write the integral with respect to $d\theta_1$ and $d\theta_2$, which are then just two simple calculus integrals from $0$ to $2\pi$. In the end, all these integrals are computed by writing a bunch of Calculus I integrals.
Differentiability of $f(x,y) = | |x| - |y| | - |x| - |y|$ at $(0,0) \in \mathbb{R}^2$
Consider the map $g\colon\Bbb R\longrightarrow\Bbb R^2$ defined by $g(x)=(x,x)$, which is differentiable. So, if $f$ were differentiable at $(0,0)$, then $f\circ g$ would be differentiable at $0$. But$$(\forall x\in\Bbb R):f\bigl(g(x)\bigr)=-2|x|,$$ and $x\mapsto-2|x|$ is not differentiable at $0$. Hence $f$ is not differentiable at $(0,0)$.
Algebraic independence
Nope. Let $r\in K$ and let $L/K$ be an extension. Define $f(x,y)=0\cdot x+1\cdot y-r$ (it's a nonzero polynomial). Then $f(x,r)=0$ for any value $x\in L$, so $\{x,r\}$ is algebraically dependent over $K$ for any $x\in L$. Thus so is any set $\{\cdots,x,r\}$ containing it.
Poisson and Binomial distributions question
If $X$ denotes the number of customers entering the store in $2$ hours, then $X\sim\mathsf{Poisson}(\lambda=6)$. Splitting up $X=X_1+X_2$, where $X_i$ stands for the number of customers that buy $i$ items, it can be noticed that $X_1,X_2$ are independent and each $\sim\mathsf{Poisson}(\lambda=3)$ (Poisson thinning, with each customer contributing to either count with probability $\frac12$). The quantity to be found is $$P(X_1+2X_2=k),$$ the probability that $k$ items are bought in total. Conditioning on $X_2=j$ gives $$\sum_{j=0}^{\lfloor k/2\rfloor} P(X_2=j)P(X_1=k-2j)=e^{-6}\sum_{j=0}^{\lfloor k/2\rfloor}\frac{3^{k-j}}{j!\left(k-2j\right)!}$$
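A quick Monte Carlo cross-check of the formula (Python; it assumes, as the derivation does, that each of the $\mathsf{Poisson}(6)$ customers independently buys $1$ or $2$ items with probability $\tfrac12$ each):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def p_formula(k):
    return math.exp(-6) * sum(3 ** (k - j) / (math.factorial(j) * math.factorial(k - 2 * j))
                              for j in range(k // 2 + 1))

n_trials = 100_000
customers = rng.poisson(6, size=n_trials)           # customers in 2 hours
items = np.array([rng.integers(1, 3, size=c).sum()  # each buys 1 or 2 items
                  for c in customers])

for k in [0, 3, 6, 9]:
    print(k, p_formula(k), (items == k).mean())     # formula vs. simulation
```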
Is it valid to subtract $1.\overline{9}$ - $0.\overline{9}$?
No US law is against what I'm about to write. Write $$ 1.\bar{9} = 1 + \sum_{i=1}^\infty \frac{9}{10^i},$$ $$0.\bar{9}= \sum_{i=1}^\infty \frac{9}{10^i}.$$ These are convergent series, so finite. Hence $$ 1.\bar{9} - 0.\bar{9} = 1 + \sum_{i=1}^\infty \frac{9}{10^i} - \sum_{i=1}^\infty \frac{9}{10^i} = 1. $$ Edit: To round out this answer (and hopefully maybe actually answer your question) the answer is yes you can do what you wrote. You need to be careful about these kinds of things. Here's an example to show that you need to be careful with doing algebra with series. Think about the series $$S = 1 + 2 + 3 + 4 + \cdots. $$ Let's group terms up, so $$ S = 1 + (2 + 3 + 4) + (5 + 6 + 7) + \cdots.$$ Based on how we grouped things up, we can simplify, $$ S = 1 + 9 + 18 + 27 + 36 + \cdots = 1 + 9(1 + 2 + 3 + 4 + \cdots) = 1 + 9S.$$ Now I'm going to rearrange things $$ -8S = 1 \implies S = -\frac{1}{8}.$$ So by this argument, $$ 1 + 2 + 3 + 4 + \cdots = -\frac{1}{8}.$$ This is not correct. I'm doing algebra with a divergent series. Now let's think about a conditionally convergent series. Take $$ S = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}.$$ Can I do the same kind of algebra you did in your post with this series? It's convergent, so maybe if you (naively) trust what I wrote you might think yes. However, it is a conditionally convergent series, so I can do some rearrangements to the series to get any value. See https://mathworld.wolfram.com/RiemannSeriesTheorem.html (the link outlines how to get two different answers with this series). The moral of the story is that it's good you're being careful about this. Doing these kinds of "tricks" can lead to incorrect results if you're not dealing with an absolutely convergent series.
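To see the rearrangement phenomenon numerically, here is a short Python sketch that reorders the terms of the alternating harmonic series (natural sum $\log 2\approx0.693$) so that the partial sums approach an arbitrary target, here $1.5$:

```python
import math

target = 1.5
pos = iter(1 / n for n in range(1, 10**7, 2))    # +1, +1/3, +1/5, ...
neg = iter(-1 / n for n in range(2, 10**7, 2))   # -1/2, -1/4, -1/6, ...

total = 0.0
for _ in range(100_000):
    # greedy rearrangement: take positive terms until the target is passed
    total += next(pos) if total <= target else next(neg)

print(total, "vs natural sum", math.log(2))  # ~1.5: same terms, different sum
```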
Generate a random direction within a cone
I'm surprised how many bad, suboptimal and/or overcomplicated answers this question has inspired, when there's a fairly simple solution; and that the only answer that mentions and uses the most relevant fact, Christian Blatter's, didn't have a single upvote before I just upvoted it. The $2$-sphere is unique in that slices of equal height have equal surface area. That is, to sample points on the unit sphere uniformly, you can sample $z$ uniformly on $[-1,1]$ and $\phi$ uniformly on $[0,2\pi)$. If your cone were centred around the north pole, the angle $\theta$ would define a minimal $z$ coordinate $\cos\theta$, and you could sample $z$ uniformly on $[\cos\theta,1]$ and $\phi$ uniformly on $[0,2\pi)$ to obtain the vector $(\sqrt{1-z^2}\cos\phi,\sqrt{1-z^2}\sin\phi,z)$ uniformly distributed as required. So all you have to do is generate such a vector and then rotate the north pole to the centre of your cone. If the cone is already thus centred, you're done; if it's centred on the south pole, just invert the vector; otherwise, take the cross product of the cone's axis with $(0,0,1)$ to get the direction of the rotation axis, and the scalar product to get the cosine of the rotation angle. Or if you prefer you can apply your idea of generating two orthogonal vectors, in the manner Christian described.
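A sketch of the whole construction in Python (the axis, half-angle and sample count are arbitrary inputs; the rotation taking the north pole to the axis is built with Rodrigues' formula from the cross and dot products, as described):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_in_cone(axis, theta, n):
    """Uniform unit vectors within angle theta of the unit vector `axis`."""
    z = rng.uniform(np.cos(theta), 1.0, n)          # uniform height slice
    phi = rng.uniform(0.0, 2 * np.pi, n)
    s = np.sqrt(1.0 - z**2)
    v = np.stack([s * np.cos(phi), s * np.sin(phi), z], axis=1)

    # Rotate the north pole onto `axis` (Rodrigues' rotation formula).
    north = np.array([0.0, 0.0, 1.0])
    k = np.cross(north, axis)
    sin_a, cos_a = np.linalg.norm(k), np.dot(north, axis)
    if sin_a < 1e-12:                               # axis is the north or south pole
        return v if cos_a > 0 else -v
    k /= sin_a
    return (v * cos_a + np.cross(k, v) * sin_a
            + np.outer(v @ k, k) * (1.0 - cos_a))

axis = np.array([1.0, 0.0, 0.0])
samples = sample_in_cone(axis, np.pi / 6, 1000)
print((samples @ axis >= np.cos(np.pi / 6) - 1e-12).all())  # True: all within 30 degrees
```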
Equation of Variation y with x
If $y - x = 1/x - 1/y$, then we can rearrange as follows: \begin{align*} xy(y - x) &= xy(1 / x - 1 / y) \\ \implies xy(y - x) &= y - x \\ \implies (xy - 1)(y - x) &= 0 \\ \end{align*} So either $xy = 1$ (i.e. $y = 1/x$, which is inverse variation), or $y = x$, which is direct variation. Indeed you can check that both of these satisfy the equation. So the answer is both! If you plot this equation, you get the union of the hyperbola $y = 1/x$ and the line $y = x$.
Pointwise convergence of series $\sum_{n=1}^{\infty} \log\left (1+\frac{x}{n^3}\right)$
For $x \in (-1,1)$ you have$$\log \left(1+\frac{x}{n^3} \right) \leq \frac{x}{n^3}.$$ Indeed, consider the function $f(x) = \log(1+x)-x$. You have $f(0)=0$ and $f'(x) = -\frac{x}{1+x}$. So for $x>0$ the derivative $f'(x)$ is negative, while for $-1<x<0$ you have $1+x > 0$, and so $f'(x)=-\frac{x}{1+x} > 0$. This means the function $f$ has a maximum at $x=0$, so $f(x) \leq 0$ for any $x\in (-1,1)$; that is, $$\log (1+x) \leq x, \quad x \in (-1,1).$$ This bounds the partial sums from above and prevents our series from diverging to $+\infty$. Further, observe that for fixed $x \in (-1,0)$ $$\lim_{n \rightarrow +\infty} \frac{\log \left(1+\frac{x}{n^3} \right)}{\frac{x}{n^3}} = 1.$$ Thus if you fix $\varepsilon \in (0,1)$ there is a natural number $N$ such that $$(1+\varepsilon)\frac{x}{n^3} < \log \left(1+\frac{x}{n^3} \right) < (1-\varepsilon)\frac{x}{n^3}$$ for any $n \ge N$ (the inequalities are flipped compared to the positive case, because $\frac{x}{n^3}<0$). Summing over $n \geq N$, $$-\infty < (1+\varepsilon)\,x\sum_{n=N}^{+\infty}\frac{1}{n^3} < \sum_{n=N}^{+\infty}\log \left(1+\frac{x}{n^3} \right) < (1-\varepsilon)\,x\sum_{n=N}^{+\infty}\frac{1}{n^3} < 0.$$ This prevents our series from diverging to $-\infty$ as well.
Interchanging integration and an operator
This answer will first formulate a more precise statement of the special case that came out in the comments, and then show why it is true. Suppose that $H$ is a (real) Hilbert space of real-valued functions on $\mathbb R$ with $L^2$ inner product. Suppose that $g:\mathbb R\times \mathbb R\to\mathbb R$ is a function such that for all $u\in \mathbb R$, $g(\cdot, u)$ is in $H$, and for all $f\in H$, $f(u)=\langle f,g(\cdot,u)\rangle$. Claim: If $T$ is a symmetric linear operator on $H$, then for all $f\in H$ and $u\in \mathbb R$, $(Tf)(u)=\langle f,Tg(\cdot,u)\rangle$. Proof: $(Tf)(u)=\langle Tf,g(\cdot,u)\rangle=\langle f,T^*g(\cdot,u)\rangle=\langle f,Tg(\cdot,u)\rangle,$ because $T=T^*$.
Why does complex conjugation permute the rows (columns) of a character table
I'm assuming we are talking about finite groups and complex representations. Then we have: $\rho(g)$ is unitary with respect to a $G$-invariant inner product (average any inner product over the group), therefore diagonalizable. Since $\rho(g)^{|G|} = 1$, the eigenvalues of $\rho(g)$ are roots of unity; this means that their inverses, which are the eigenvalues of $\rho(g^{-1})$, are their complex conjugates, and therefore $\overline{\chi(g)} = \chi(g^{-1})$. Consequently $\overline{\chi}$ is again an irreducible character (the character of the conjugate representation), so $\chi \mapsto \overline{\chi}$ permutes the rows of the character table; and since $g \mapsto g^{-1}$ permutes the conjugacy classes, the identity $\overline{\chi(g)} = \chi(g^{-1})$ shows that conjugating a row amounts to permuting its columns accordingly.
A tough question on meromorphic functions - Conway
I am not sure if this follows directly from the Casorati-Weierstraß theorem, but I think that it can be proved in a similar way. The statement "For each $w\in \mathbb{C}$ there is a sequence $\{z_n\}$ in $D_r(z_0)\setminus\{z_0\}$ such that $\lim_{n\rightarrow \infty}z_n=z_0$ and $\lim_{n\rightarrow \infty}f(z_n)=w$" can be equivalently formulated as "For each $w\in \mathbb{C}$ and for each $\varepsilon > 0$ there is a $z \in D_r(z_0) \setminus \{z_0\}$ with $\left| z-z_0 \right| < \varepsilon $ and $\left| f(z) - w \right| < \varepsilon $." Let us assume that this is false. Then we have: there is a $w_0 \in \mathbb C$ and an $\varepsilon_0 > 0$ such that $\left| f(z) - w_0 \right| \ge \varepsilon_0 $ for all $z \in D_r(z_0) \setminus \{z_0\}$ with $\left | z-z_0 \right| < \varepsilon_0 $. Then the function $g := 1/(f - w_0)$ is holomorphic and bounded (by $1/\varepsilon_0$) on a punctured disk with center $z_0$: at the poles of $f$ it extends holomorphically with value $0$, and therefore it has a removable singularity at $z_0$. Hence $f = w_0 + 1/g$ has a meromorphic extension to $D_{\varepsilon_0}(z_0)$. This contradicts the assumption that the poles of $f$ accumulate at $z_0$.
How can Jaccard similarity be approximated with minhash similarity?
It is not that $h(S_1)=h(S_2)$ is always true; in fact $h(S_1)=a$ and $h(S_2)=c$, as explained in the example at page 80, where $h$ is the minhash function associated to the permutation $\{abcde\}\mapsto \{beadc\}$ and $S_1$ resp. $S_2$ are given in figure 3.3. What is true is that the probability of having $h(S_1)=h(S_2)$, over a uniformly random permutation applied to $S_1$ and $S_2$, is equal to $$\frac{x}{x+y},$$ which is also the Jaccard similarity of the two column sets. This is the meaning of the paragraph in the OP. To show this, the author imagines moving from the top of the column sets and uses the definitions of the $X$, $Y$ and $Z$ classes of rows for the pair $S_1$, $S_2$. Add on: Let us discuss the statement "the probability of having $h(S_1)=h(S_2)$, over a uniformly random permutation, is equal to $\frac{x}{x+y}$". In order to compute $h(S_1)$ and $h(S_2)$ for two column sets $S_1$ and $S_2$, we need to identify the position (= row number) of the first $1$, starting from the top, in each of the column vectors: this is the definition of the minhash function $h(\cdot)$. To simplify the discussion the author introduces three classes of rows: class $X$ (the row has a $1$ in both $S_1$ and $S_2$), class $Y$ (the row has a $1$ in exactly one of $S_1$, $S_2$) and class $Z$ (the row has a $0$ in both). Moving from top to bottom along both $S_1$ and $S_2$ we skip all rows of class $Z$: they contain no $1$; we stop the search as soon as we meet a row of class $X$ or class $Y$. We distinguish two cases. Case 1: we meet a row of class $X$. As the element $1$ appears in both $S_1$ and $S_2$ in that row, $h(S_1)$ and $h(S_2)$ are both equal to that row number, so $h(S_1)=h(S_2)$. The question is: what is the probability of meeting an $X$ row before a $Y$ row while moving from top to bottom? Since a random permutation makes all orderings of the $x+y$ rows of classes $X$ and $Y$ equally likely, this probability is $\frac{x}{x+y}$. You can convince yourself of it with an explicit example, say $S_1 = (1,0,1)$ and $S_2 = (0,1,1)$. Here $x=1$ and $y=2$: there is a $1/3\approx 33.3\%$ probability that the unique class-$X$ row $(1,1)$ comes before the two class-$Y$ rows $(0,1)$ and $(1,0)$. Remember that $S_1$ and $S_2$ are subsets whose representations as $0$-$1$ vectors are considered up to a common permutation of the rows. Case 2: we meet a row of class $Y$. As the element $1$ appears in only one of $S_1$, $S_2$ in that row, exactly one of $h(S_1)$, $h(S_2)$ equals that row number, and their values cannot be equal, by construction of the class $Y$.
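A short simulation (Python; the two column sets below are arbitrary) estimating $P(h(S_1)=h(S_2))$ over random permutations and comparing it with $x/(x+y)$:

```python
import numpy as np

rng = np.random.default_rng(0)

S1 = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # arbitrary 0-1 column vectors
S2 = np.array([0, 1, 1, 1, 0, 0, 0, 1])

x = np.sum((S1 == 1) & (S2 == 1))        # rows of class X
y = np.sum(S1 != S2)                     # rows of class Y
print("Jaccard similarity:", x / (x + y))

hits, trials = 0, 100_000
n = len(S1)
for _ in range(trials):
    perm = rng.permutation(n)            # random row order
    h1 = np.argmax(S1[perm])             # index of the first 1 = minhash value
    h2 = np.argmax(S2[perm])
    hits += (h1 == h2)
print("P(h(S1) = h(S2)) ~", hits / trials)  # close to x/(x+y)
```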
Nonzero Prime Ideals are Maximal in Euclidean Domains
Let $R$ be a Euclidean Domain and let $0 \neq P$ be a prime ideal. Since $R$ is a Euclidean Domain, it is a Principal Ideal Domain, so $P = (p)$ for some $p \in R$, and we have the following result: Lemma: $p$ is prime if and only if $(p)$ is prime. This tells us that $p$ is prime, given that $P = (p)$ is prime. So suppose $P \subseteq M \subseteq (1)$ for an ideal $M$. Since $R$ is a Principal Ideal Domain, $M = (y)$ for some $y \in R$, so we have $(p) \subseteq (y) \subseteq (1)$. Since $p \in (y)$, $p = ry$ for some $r \in R$. Since $p$ is prime, either $p \mid r$ or $p \mid y$. If $p \mid r$, then $r = ps$ for some $s \in R$, in which case $p = ry = (ps)y$, which means $1 = sy$ (a Euclidean Domain is an Integral Domain, so we can cancel $p$); hence $y$ is a unit, so $(y) = (1)$. If $p \mid y$, then $y = ps$ for some $s \in R$, and $y \in (p)$, which means $(p) = (y)$. Conclude by definition that $P$ is a maximal ideal.
Usage of open sets in the definition of manifold
Suppose that $V$ is any neighbourhood of $x$ (in $M$) such that there is a homeomorphism $h: V \to O$, where $O \subseteq\Bbb R^n$ is open. Being a neighbourhood of $x$ in $M$ means that there is an open set $V_x$ containing $x$ in $M$ such that $V_x \subseteq V$. But then $h[V_x]$ is open in $O$ (and hence open in $\Bbb R^n$ too!) and the restriction $h: V_x \to h[V_x]$ is a homeomorphism between an open neighbourhood of $x$ and an open subset of $\Bbb R^n$, just as your definition requires. So whether we ask for just a neighbourhood or an open neighbourhood, we get the same result; no extra generality is achieved. But it's nicer (in manifold theory) to work with open neighbourhoods, so we just assume that from the start.
The mirror point of an ellipse's focus about the tangent line through a point is collinear with said point and the other focus
The following proof is cited from the book What Is Mathematics? Consider any line $l$ in the plane $\mathbb{R}^2$. For any two points $F,F'\notin l$ on the same side of $l$, and for any given length $L=|QF|+|QF'|$, one may find the locus of $Q$ to be an ellipse by definition. One can find exactly one point $P\in l$ satisfying $$|PF|+|PF'|=\underset{Q\in l}{\min}(|QF|+|QF'|). $$ Now let $L=\underset{Q\in l}{\min}(|QF|+|QF'|)$, and the corresponding ellipse is tangent to $l$ at $P$. Notice how one may find the point $P$: reflect $F'$ about $l$ to get a point $G$; then the intersection of the segment $FG$ with $l$ is exactly $P$. This can be easily proven by the triangle inequality: for any $Q\in l$, $$|QF|+|QF'|=|QF|+|QG|\geqslant|FG|=|PF|+|PF'|. $$ And this is why the reflection property of the ellipse holds.
If $A = u*u^T$ where $0 \neq u \in \mathbb{R}^n$ , then find the eigenvalues of $A$ and show that $A$ is diagonalizable.
Hints: $u$ is an eigenvector of $A$, with which eigenvalue? $A$ has rank $1$, so what is the nullity?
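For a concrete check (NumPy, with an arbitrary $u$): the spectrum of $uu^T$ should consist of $u^Tu=\lVert u\rVert^2$ once and $0$ with multiplicity $n-1$.

```python
import numpy as np

u = np.array([1.0, 2.0, -2.0])   # arbitrary nonzero vector
A = np.outer(u, u)               # A = u u^T, a rank-1 symmetric matrix

print(np.linalg.eigvalsh(A))     # [0, 0, 9]: zero twice, u^T u = 9 once
print(A @ u, (u @ u) * u)        # A u = (u^T u) u, so u is an eigenvector
```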
Residue for 1/t^(1/2)
An idea: substitute $y^2=t\implies 2y\,dy=dt$, and your integral becomes $$2\int_0^\infty\frac{\log^2(y^2-\pi^2)}{1+y^4}\,y\,dy.$$ Clearer now?
Find the limit of $\int_{0}^{\pi }\frac{\sin nx}{nx} \mathrm dx $.
We have that, with the substitution $u=nx$, $$\int_{0}^{\pi }\frac{\sin nx}{nx} \,\mathrm dx=\int_{0}^{n\pi }\frac{\sin u}{u}\,\frac{\mathrm du}{n}={1\over n }\int_{0}^{n\pi} {\sin u\over u}\,\mathrm du.$$ Also we know that $$\int_0^{\infty}{\sin u\over u}\,\mathrm du={\pi \over 2}$$ (the Dirichlet integral), so the integrals $\int_0^{n\pi}\frac{\sin u}{u}\,\mathrm du$ are bounded in $n$. Dividing a bounded quantity by $n$, we get $$\lim _{n\to \infty}\int_{0}^{\pi }\frac{\sin nx}{nx} \,\mathrm dx=0.$$
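A quick numerical confirmation (Python/SciPy) using $\int_0^\pi \frac{\sin nx}{nx}\,dx=\mathrm{Si}(n\pi)/n$:

```python
import numpy as np
from scipy.special import sici

for n in [1, 10, 100, 1000, 10**6]:
    Si, _ = sici(n * np.pi)   # Si(x) = integral of sin(u)/u from 0 to x
    print(n, Si / n)          # equals the integral; tends to 0 like (pi/2)/n
```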
Is there a continuous and surjective mapping from a simple connected space to the topologist's sine curve?
No. It's not possible to get a continuous surjection from your $M$ to $T$, since $M$ is path-connected and $T$ is not. More generally, any space with a continuous surjection to $T$ will have to not be path-connected. Since $T$ is one of the most basic examples of a connected space that is not path-connected, I highly doubt there is any connected space with a continuous surjection to $T$ which you would consider "simpler".
a conjectured new generating function of Narayana's sequence
The ordinary generating function for your recurrence is $$ g(x) = \dfrac{1+x^2}{1-x-x^3}$$ Thus $$\sum_{n=0}^\infty (-1)^n a_n q^n = g(-q) = \frac{1+q^2}{1+q+q^3}\tag1$$ If that is $\phi(q)$, then $$ \phi(1/q) = \dfrac{1/q^2+1}{1/q^3 + 1/q + 1} = \dfrac{q (1 + q^2)}{1+q^2 + q^3},$$ which is close to, but not equal to, $q \phi(q) = \dfrac{q(1+q^2)}{1+q+q^3}$: the denominators differ in the middle term, and e.g. at $q=2$ one gets $\phi(1/2)=10/13$ while $2\,\phi(2)=10/11$. So the symmetry $\phi(1/q)=q\,\phi(q)$ does not hold exactly for this $\phi$. Now let's try to get your continued fraction. $$ \phi(q) = \dfrac{1}{1+q - \phi_1(q)}$$ where $$ \phi_1(q) = \dfrac{q^2}{q^2+1}$$ $$\phi_1(q) = \dfrac{q^2}{1 + q^3 + \phi_2(q)}$$ where $$ \phi_2(q) = q^2 - q^3$$ $$ \phi_2(q) = \dfrac{q^2 (1-q)(1-q^3)}{1+q^5 - \phi_3(q)}$$ where $$ \phi_3(q) = q^3 + q^5$$ $$\phi_3(q) = \dfrac{q^3(1+q^2)(1+q^4)}{1+q^7 + \phi_4(q)}$$ where $$ \phi_4(q) = q^4 - q^7$$ Hmm. Looks like a pattern developing, and it should be possible to prove it by induction.
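A quick symbolic check of the generating function (Python/SymPy; the initial values $a_0=a_1=1$, $a_2=2$ are what the numerator $1+x^2$ forces, and are my assumption about the OP's indexing):

```python
import sympy as sp

x = sp.symbols('x')
g = (1 + x**2) / (1 - x - x**3)
series_coeffs = sp.Poly(sp.series(g, x, 0, 12).removeO(), x).all_coeffs()[::-1]

a = [1, 1, 2]                 # assumed initial values a_0, a_1, a_2
while len(a) < 12:
    a.append(a[-1] + a[-3])   # a_n = a_{n-1} + a_{n-3}

print(series_coeffs)          # [1, 1, 2, 3, 4, 6, 9, 13, 19, 28, 41, 60]
print(a)                      # matches the recurrence
```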