Change of (orthonormal) basis.
The idea is this: there is a correspondence between matrices in $\mathbb{F}^{n\times n}$ and linear operators in $\mathcal{L}(V,V)$. To every linear operator $T\in \mathcal{L}(V,V)$ one can associate a matrix as follows: pick a basis $\mathcal{B} = \{e_i\}$ of $V$, and consider the matrix $T_{\mathcal{B}}$ whose columns are the coordinate vectors of the $T(e_i)$ with respect to $\mathcal{B}$. Clearly, this depends on the choice of basis, so the question is: if we choose a different basis $\mathcal{B}'$, how are the two matrices $T_{\mathcal{B}}$ and $T_{\mathcal{B}'}$ related? The answer: there is an invertible matrix $P$ (the change-of-basis matrix) such that $$ T_{\mathcal{B}'} = PT_{\mathcal{B}}P^{-1} $$
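A quick numerical illustration of the similarity relation, with a hypothetical operator and change-of-basis matrix of my choosing (not from the question). Similar matrices share trace, determinant, and eigenvalues, which is easy to check:

```python
import numpy as np

# Hypothetical example in R^2: T_B is the matrix of some operator T in a
# basis B, and P is an (invertible) change-of-basis matrix.
T_B = np.array([[2.0, 1.0],
                [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [1.0, 2.0]])

# Matrix of the same operator in the other basis B'
T_Bp = P @ T_B @ np.linalg.inv(P)

# Similar matrices share trace, determinant, and eigenvalues
print(np.trace(T_Bp), np.linalg.det(T_Bp))
print(sorted(np.linalg.eigvals(T_Bp).real))
```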
Find y position and width of ellipse
Solve in terms of $U$ $$\frac{a^2}{w^2}+(\frac{p_{0_y}-y}{y-v})^2= U;\;$$ $$\frac{b^2}{w^2}+(\frac{p_{1_y}-y}{y-v})^2=U\;$$ To solve for two variables you need two equations. You gave one.
Find period of the following function
$$f(x)=| \sin(x) + \cos(x) | =\sqrt{2}\Big|\sin(x+{\pi\over 4})\Big|$$ Now $$f(x+\pi) = \sqrt{2}\Big|\sin(x+{\pi\over 4}+\pi)\Big| = \sqrt{2}\Big|\sin(\pi-(x+{\pi\over 4}+\pi))\Big|$$ $$=\sqrt{2}\Big|\sin(-x-{\pi\over 4})\Big|= \sqrt{2}\Big|-\sin(x+{\pi\over 4})\Big|=f(x)$$ So the period of $f$ is $\pi$.
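A numeric sanity check of the claim (sampling points is only evidence, not a proof): $\pi$ is a period, while $\pi/2$, for comparison, is not.

```python
import math

f = lambda x: abs(math.sin(x) + math.cos(x))

xs = [0.1 * k for k in range(100)]
# f(x + pi) == f(x) at every sample point, up to rounding
shift_pi = max(abs(f(x + math.pi) - f(x)) for x in xs)
# pi/2, by contrast, is visibly not a period
half_pi = max(abs(f(x + math.pi / 2) - f(x)) for x in xs)
print(shift_pi, half_pi)
```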
Let $\sum a_n$ and $\sum b_n$ converge, $a_n,b_n\geq 0$, does $\sum \min\{a_n,b_n\}$, $\sum \max\{a_n,b_n\}$ converge too?
Since I don't know what “will pick either value of one of the partial sums” means, I can't tell whether you are right or wrong. But you can do it as follows: since both series $\sum_{n=0}^\infty a_n$ and $\sum_{n=0}^\infty b_n$ converge, the series $\sum_{n=0}^\infty(a_n+b_n)$ converges too. And since$$(\forall n\in\mathbb Z_+):\min\{a_n,b_n\},\max\{a_n,b_n\}\leqslant a_n+b_n,$$both series $\sum_{n=0}^\infty\min\{a_n,b_n\}$ and $\sum_{n=0}^\infty\max\{a_n,b_n\}$ converge, by the comparison test.
Question about queues
1) When the two pointers (indices) head[Q] and end[Q] are equal, the queue is empty (which is what we want initially); when head[Q] = end[Q] = 1, they both point to the first position of the array, ready to start the queue. 2) When head[Q] - 1 = end[Q], the queue is full and there is no free place left for a new element. head[Q] points to the first element of the queue and end[Q] points to the first empty position where a new element can go. Note that the pointer end[Q] is cyclic, i.e. after Q[n] has been used, it wraps around to the first position Q[1] of the array.
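A minimal Python sketch of such a circular queue, using 0-based indices instead of the 1-based Q[1..n] above; the class and method names are mine. One array slot is kept sacrificial so that the empty and full conditions stay distinguishable, exactly as in the head/end convention described:

```python
class CircularQueue:
    """Fixed-capacity queue on an array; one slot is kept empty so that
    head == tail  <=>  empty, and (tail + 1) % size == head  <=>  full."""

    def __init__(self, capacity):
        self.size = capacity + 1          # one sacrificial slot
        self.data = [None] * self.size
        self.head = 0                     # index of first element
        self.tail = 0                     # index of first free slot

    def is_empty(self):
        return self.head == self.tail

    def is_full(self):
        return (self.tail + 1) % self.size == self.head

    def enqueue(self, x):
        if self.is_full():
            raise OverflowError("queue is full")
        self.data[self.tail] = x
        self.tail = (self.tail + 1) % self.size   # tail wraps around

    def dequeue(self):
        if self.is_empty():
            raise IndexError("queue is empty")
        x = self.data[self.head]
        self.head = (self.head + 1) % self.size   # head wraps around too
        return x
```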
How can one show that $f(0)=0$ for $f$ satisfying certain conditions?
I think this will do the trick for showing that $f(0) = 0$. Start by setting $x = 0$: $$f(f(y)) = f(0) + (1+f(0))y.$$ Setting $y = 0$ instead gives $$f(x) = f(x + (1+x)f(0)).$$ Now apply $f$ to both sides and use the first equation: $$f(0) + (1+ f(0))x = f(0) + (1+f(0))(x + (1+x)f(0)).$$ This simplifies to $$(1+x)(1+f(0))f(0) = 0.$$ Since this holds for every $x$, we get $(1+f(0))f(0)=0$; the case $f(0)=-1$ would make $f$ constant by the second equation, which is incompatible with $f(x)/x$ being increasing, so $f(0) = 0$. We can also conclude that $f(f(y)) = y$, so $f$ is its own inverse. Suppose there are numbers $a,b$ with $a < b$, $f(a) = b$ and $f(b) = a$ (possible a priori, since $f$ is its own inverse). The function $$ g(x) = \frac{f(x)}{x}$$ is increasing by hypothesis, so $g(a) \le g(b)$, that is, $$\frac{f(a)}{a} \le \frac{f(b)}{b} \iff \frac{b}{a} \le \frac{a}{b} \iff$$ $$\frac{b^2 - a^2}{ab} \le 0 \iff \frac{(b-a)(b+a)}{ab} \le 0 \iff $$ $$ \frac{(b+a)}{ab} \le 0, $$ so there can be no such $a,b$ with the same sign. Hence $x$ and $f(x)$ must have different signs. As the OP noticed, the equation $$ f(x+(1+x)f(x)) = x + (1+x)f(x) $$ holds. Denote $x+ (1+x)f(x)$ by $t$. Then the equation says $f(t) = t$, but $f(t)$ and $t$ have different signs, so $t$ must be zero. Thus $x + (1+x)f(x) = 0$ and the function $f$ is $$ f(x) = -\frac{x}{1+x}.$$
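A quick numeric check that $f(x) = -x/(1+x)$ really is its own inverse and satisfies $x + (1+x)f(x) = 0$ away from $x = -1$:

```python
f = lambda x: -x / (1 + x)

# f is an involution, and x + (1+x) f(x) = 0, for any x != -1
for x in [-0.9, -0.5, 0.0, 0.3, 2.0, 10.0]:
    assert abs(f(f(x)) - x) < 1e-12
    assert abs(x + (1 + x) * f(x)) < 1e-12
print("checks passed")
```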
DAG proof by numbering nodes
Hint: We need to show two things: (If part) If the graph has a topological sorting, then it is acyclic. Equivalently, if the graph has a cycle (is not acyclic), then it does not have a topological sorting. How to show that? Let $G$ be a graph with a cycle $u_{1}, u_{2}, \dots, u_{1}$. Assume for the sake of contradiction that $G$ has a topological sorting, i.e., a sorting/labeling of the vertices with the properties that you mentioned. The vertices of the cycle in $G$ are also labeled. Show that they violate the properties of the topological sorting. (Only if part) The graph is acyclic only if it has a topological sorting. Equivalently, if the graph is acyclic, then it has a topological sorting. Maybe just show how to make one? If a graph is acyclic, then it has some vertex $v$ that does not have any incoming edges...
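The construction in the hint (repeatedly peeling off vertices with no incoming edges) is exactly Kahn's algorithm; a minimal sketch, which also detects cycles:

```python
from collections import deque

def topological_sort(n, edges):
    """Return a topological order of vertices 0..n-1, or None if the
    directed graph contains a cycle (Kahn's algorithm)."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    # Start from vertices with no incoming edges (at least one exists in a DAG)
    queue = deque(v for v in range(n) if indeg[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order if len(order) == n else None   # leftover vertices => cycle

print(topological_sort(3, [(0, 1), (1, 2)]))          # a DAG: [0, 1, 2]
print(topological_sort(3, [(0, 1), (1, 2), (2, 0)]))  # a cycle: None
```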
How to easily find coefficents of a set of matricies to find if the set spans a linear space?
Two things to check when dealing with problems like whether a given set of vectors is a basis or not: The vectors should be linearly independent, i.e. letting $A_{1},A_{2},A_{3},A_{4}$ denote the given vectors, $a_1 A_1 + a_2 A_2 +a_3 A_3 +a_4 A_4 = 0 \implies a_1 = a_2 = a_3 = a_4 = 0$. Given any $A \in \mathbb{R}^{2\times 2}$ (the given space), $A$ can be written uniquely in terms of the given vectors. Part 1 can be easily shown and has been shown in your image: the only set of values of $c_1, c_2, c_3, c_4$ satisfying $c_1 A_1 + c_2 A_2 + c_3 A_3 + c_4 A_4 = 0_{2\times 2}$ is $c_1=c_2=c_3=c_4 = 0$. For part 2, show that any $A = [b_{11}, b_{12} ; b_{21}, b_{22}]$ can be written as a linear combination of the given vectors: $$A = c_1 A_1 + c_2 A_2 +c_3 A_3 +c_4 A_4.$$ Now you have 4 equations in the 4 unknowns $c_1, c_2, c_3 ,c_4$. Solve! Cheers!
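The actual matrices $A_i$ are in the question's image and not shown here, so the sketch below uses a hypothetical basis of $\mathbb{R}^{2\times 2}$ just to show the mechanics: flatten each matrix into a vector, and both checks become one $4\times 4$ linear system.

```python
import numpy as np

# Hypothetical basis of R^{2x2} (NOT the A_i from the question, which are
# in the image): each 2x2 matrix becomes a column of a 4x4 system.
A1 = np.array([[1, 0], [0, 0]])
A2 = np.array([[1, 1], [0, 0]])
A3 = np.array([[1, 1], [1, 0]])
A4 = np.array([[1, 1], [1, 1]])

M = np.column_stack([A.reshape(4) for A in (A1, A2, A3, A4)])

# Part 1: linear independence  <=>  rank 4  <=>  M invertible
print(np.linalg.matrix_rank(M))        # 4 => independent

# Part 2: solve for the coefficients of an arbitrary target A
A = np.array([[5, -2], [3, 7]])
c = np.linalg.solve(M, A.reshape(4))
print(c)
assert np.allclose(sum(ci * Ai for ci, Ai in zip(c, (A1, A2, A3, A4))), A)
```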
Solving third degree polynomial $x^3+2x^2+6x+5=0$
We have $$ x^3+2x^2+6x+5=(x^2+x+5)(x+1), $$ where evaluating at $x=-1$ (the alternating sum $-1+2-6+5=0$) shows that $x+1$ is a factor.
Determine if a 4-tuple exists
[Lemma] The sequence will come back to the original tuple (in this case, 2003). (Proof) Choosing the next number is always deterministic, and identifying the previous number is also deterministic. As there are only finitely many 4-digit windows, at some point a window must repeat, and from then on the sequence cycles. The initial tuple must lie inside that cycle: otherwise the cycle would have a branch (a junction), which is impossible since the backward sequence is also deterministic. So 2003 will show up a second time (and in fact over and over again). For 2003 to show up again, the following digits must precede it: $$0-4-0-7-1-2-0-0-3$$ Therefore, 0407 will show up.
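A simulation of the recurrence (assuming, as in the problem, that each new digit is the units digit of the sum of the previous four) confirms both the lemma and the appearance of 0407:

```python
def digit_stream(start=(2, 0, 0, 3), limit=100000):
    """Each new digit is the units digit of the sum of the previous four;
    stop when the 4-digit window returns to `start` (the lemma above
    guarantees this happens)."""
    window = list(start)
    digits = list(start)
    for _ in range(limit):
        d = sum(window) % 10
        digits.append(d)
        window = window[1:] + [d]
        if tuple(window) == start:   # the cycle has closed
            return digits
    return digits

s = "".join(map(str, digit_stream()))
print(len(s), "0407" in s)
```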
If $U,W$ are subspaces of vector space $V$, show that $U\cap W$ is a subspace.
Let $u,v\in U\cap W$ and let $c$ be a scalar. Since $U$ is closed under addition and scalar multiplication, $u+v\in U$ and $cu\in U$; the same holds in $W$. Hence $u+v$ and $cu$ lie in $U\cap W$. Finally, $0\in U$ and $0\in W$, so $0\in U\cap W$, and therefore $U\cap W$ is a subspace.
quotient of a hyperplane by the action of cyclic group
First note that $S^3 = \{(x,y,-y-x)\mid |x|^2+|y|^2 + |x+y|^2 = 1\}$. Since the action of $G=\mathbb{Z}_3$ permutes the coordinates, it's clear that $G$ preserves $S^3$. Now, suppose $e\neq g\in G$ and that $g(x,y,-y-x) = (x,y,-y-x)$. Then in particular, we also have $g^2(x,y,-y-x) = (x,y,-y-x)$. It follows that we must have $x = y = -y-x$, and from this it follows that the fixed point was $(0,0,0)$, which is not an element of $S^3$. Note that this proof doesn't use the fact that we're looking at the unit sphere, only that the sphere has nonzero radius. Next, notice that $H \cong C(S^3) = S^3\times[0,\infty)/{\sim}$, where $\sim$ collapses all of $S^3\times\{0\}$ to a point. The map establishing this homeomorphism can be defined as follows: every nonzero point $q$ in $H$ determines a unique ray emanating from $0$. This ray pierces the sphere $S^3$ in precisely one point $f(q)$. Now, map $H$ to $C(S^3)$ by sending $q$ to $(f(q), |q|^2)$ if $q\neq 0$ and sending $0$ to $S^3\times\{0\}$. I leave it to you to prove this is a homeomorphism. Further, this homeomorphism is $G$-equivariant if $G$ acts on $C(S^3)$ by simply copying the $G$ action on each $S^3$. That is, $g(p, t) = (gp, t)$. (Again, I leave this to you to prove.) This should easily allow you to construct a homeomorphism from $H/G \cong C(S^3)/G$ to $C(S^3/G)$.
Computing Limits of Multivariable Functions
Let $f(x,y) = \frac{y}{1+xy} - \frac{1-y\sin(\frac{\pi x}{y})}{\arctan(x)}$. Let's modify our function: $$\frac{y}{1+xy} - \frac{1-y\sin(\frac{\pi x}{y})}{\arctan(x)} = \frac{1}{x}\frac{1}{\frac{1}{xy}+1} - \frac{1-y\sin(\frac{\pi x}{y})}{\arctan(x)}.$$ Now let $z = \frac{1}{y}$; hence $$\lim_{y\rightarrow \infty} f(x,y) = \lim_{z\rightarrow 0} f(x,\tfrac{1}{z})=\lim_{z\rightarrow 0}\frac{1}{x}\frac{1}{\frac{z}{x}+1} - \frac{1-\frac{1}{z}\sin(z\pi x)}{\arctan(x)}.$$ Clearly $\frac{1}{x}\frac{1}{\frac{z}{x}+1} \rightarrow \frac{1}{x}$, so let's have a closer look at $\frac{1-\frac{1}{z}\sin(z\pi x)}{\arctan(x)}$. Using Taylor series we can write $$\frac{1}{z}\sin(z\pi x) = \frac{1}{z}\Big(z\pi x + \mathcal{O}(z^3)\Big)=\pi x + \mathcal{O}(z^2).$$ Can you go from here?
Complex analysis of $f(z) = 2 y^2 \sin x − i y^3 \cos x$
Let's check when the Cauchy-Riemann equations hold: $$u=2y^2\sin x, v=-y^3\cos x$$ $$u_x=2y^2\cos x, v_y=-3y^2\cos x$$ $$u_y=4y\sin x, -v_x=-y^3\sin x$$ A necessary condition for being differentiable is for the CR equations to hold - $u_x=v_y,u_y=-v_x$. From this, you get a strong restriction on the possible points where $f$ is differentiable (certainly not everywhere). Moreover, it's easy to see that $f$ cannot be differentiable on any open set, so it's nowhere analytic.
$\lim_{t\to \infty}x(t)$:convergent $\Rightarrow$ $\lim_{t\to \infty}x'(t)=0$
No, take $$f(t)=\frac{\sin(t^2)}{t}$$ Then $$f'(t) = 2 \cos(t^2)-\frac{\sin(t^2)}{t^2}$$ And it doesn't converge to 0.
Bijection between closed unit interval and $ R$
Yes, because they both have the cardinality of the continuum.
Bombing of Königsberg problem
Color the vertices with an odd number of edges white, and the other vertices black. We consider sets of paths, each path from a white vertex to another white vertex. Such a set of paths with a minimal number of edges, that still connects to every white vertex, constitutes a minimum number of bridges to destroy that leaves an Eulerian cycle. If instead we consider such a set of paths with a minimal number of edges, that connects to every white vertex but two, that will answer the original question, giving the minimal number of bridges to destroy that leaves an Eulerian path.
Convergence of the sequence $a_n=\int_0^1{nx^{n-1}\over 1+x}dx$
We have $$ a_n=\int_0^1\frac{nx^{n-1}}{1+x}\,dx=\frac{x^n}{1+x}\Big|_0^1+\int_0^1\frac{x^n}{(1+x)^2}\,dx=\frac12+\int_0^1\frac{x^n}{(1+x)^2}\,dx \quad \forall n \ge 1. $$ Since $$ \int_0^1\frac{x^n}{(1+x)^2}\,dx\le \int_0^1x^n\,dx=\frac{1}{n+1} \quad \forall n\ge 1, $$ it follows that $$ \lim_n\int_0^1\frac{x^n}{(1+x)^2}\,dx=0. $$ Thus $\lim_na_n=\frac12$.
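A numeric check of the limit $a_n \to \frac12$, using a simple midpoint rule (only supporting evidence, of course, not part of the proof):

```python
def a(n, steps=50000):
    """Midpoint-rule approximation of a_n = integral_0^1 n x^(n-1)/(1+x) dx."""
    h = 1.0 / steps
    return sum(n * ((k + 0.5) * h) ** (n - 1) / (1 + (k + 0.5) * h) * h
               for k in range(steps))

for n in (10, 100, 1000):
    print(n, a(n))   # decreases toward 1/2 as n grows
```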
Function notation terminology
I remember reading this text and being puzzled by the exact same question. From what I've learned from my teacher, you're right: writing something like "the function $f(x)$..." is sloppy notation. However, many books/people use it this way. If you are very precise, $f(x)$ is not a function or a map. I don't know of a standard way to refer to $f(x)$, but here is some usage I found on the internet: The output of a function $f$ corresponding to an input $x$ is denoted by $f(x)$. Some would call "$f(x)=x^2$" the rule of the function $f$. For each argument $x$, the corresponding unique $y$ in the codomain is called the function value at $x$ or the image of $x$ under $f$. It is written as $f(x)$. If there is some relation specifying $f(x)$ in terms of $x$, then $f(x)$ is known as a dependent variable (and $x$ is an independent variable). A correct way to notate your function $f$ is: $$f:\Bbb{R}\to\Bbb{R}:x\mapsto f(x)=x^2$$ Note that $f(x)\in\Bbb{R}$ while $f\not\in\Bbb{R}$: the function $f$ is an element of the set of continuous functions, and $f(x)$ isn't. In some areas of math it is very important to notate a function/map by specifying its domain, codomain and function rule. However, in calculus or physics for example, you'll see that many times only the function rule $f(x)$ is specified, as the reader is supposed to understand the domain/codomain from the context. You can also check these questions: In written mathematics, is $f(x)$ a function or a number? What is the difference between writing f and f(x)?
measure theory convergence theorems for non-discrete indexing parameter
To elaborate on Jonas Teuwen's comment, recall the following fact about real functions: $f(t) \to b$ as $t \to a$ if and only if, for every sequence $t_n$ with $t_n \to a$, we have $f(t_n) \to b$ as $n \to \infty$. So let $t_n$ be an arbitrary sequence with $t_n \to 0$. Show that $\int |f(x + t_n) - f(x)|dx \to 0$ as $n \to \infty$ (which is probably the same proof as your special case $t_n = 1/n$). Then by the above fact, you are done.
$U$ invertible and $U^{\ast}U=1$ then $UU^{\ast}=1$ (Barry Simon)
Because you don't mention it, you might be overlooking something in your argument for $(a)$. You don't say how you get from $$\tag1\langle \varphi, U^*U\varphi\rangle=\langle \varphi,\varphi\rangle$$ to $U^*U\varphi=\varphi.$ This is not hard, but it is not entirely immediate. You would usually use the polarization identity to get from $\langle \varphi,(U^*U-I)\varphi\rangle=0$ (for every $\varphi$) to $U^*U=I$. And the argument does not work without quantifiers: you cannot conclude, if you have $(1)$ for a single $\varphi$, that $U^*U\varphi=\varphi$. You need to have $(1)$ for all $\varphi\in H$. For instance, with $H=K=\mathbb C^2$, let $\varphi=\begin{bmatrix} 1\\0\end{bmatrix}$, and $T=\begin{bmatrix} 1&0\\-1&0\end{bmatrix}$. Then $$ \langle T\varphi,\varphi\rangle=\left\langle\begin{bmatrix} 1\\-1\end{bmatrix},\begin{bmatrix} 1\\0\end{bmatrix}\right\rangle=1=\langle\varphi,\varphi\rangle, $$ but $T\varphi\ne\varphi$. For part (b), if $U^*U=I$ and $U$ is invertible, then multiplying on the right by $U^{-1}$ you get $U^*=U^{-1}$. Now you have $I=UU^{-1}=UU^*$.
Using Legendre polynomial to approximate any polynomial
It's not a matter of approximation - any polynomial can be exactly decomposed into a linear combination of Legendre polynomials. One way to see this is that $P_n$ has degree $n$. The highest order term can be seen to be equal to $\frac{(2n)!}{2^nn!^2}x^n$ (by looking at Rodrigues' formula). The decomposition of a polynomial $F(x) = \sum_{j=0}^n a_jx^j$ of degree $n$ thus can be started with a term $G_n(x) = a_n\frac{2^nn!^2}{(2n)!}P_n(x)$. Next, the remaining highest term in $F(x)-G_n(x)$ is considered, and the appropriate $P_m$ is used (often this is $P_{n-1}$, because a remaining term of degree $n-1$ has to be tackled). And so forth, until $P_0 = 1$ is used to correct for the remaining constant term. Another, more elegant way is to recognize that $(f,g) = \int_{-1}^1 f(x)g(x)dx$ is a proper inner product on the space of square-integrable functions on $[-1, 1]$, and that the $P_n$ are mutually orthogonal, that is, $\int_{-1}^1 P_nP_mdx = 0$ if $n \not=m$. This implies that $F$ can be uniquely decomposed as $$F= \sum_{j=0}^n \frac{(F, P_j)}{(P_j, P_j)}P_j$$
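NumPy's `numpy.polynomial.legendre` module performs exactly this change of basis; a small sketch with a polynomial of my choosing, showing that the decomposition is exact (the round trip recovers the power-basis coefficients):

```python
import numpy as np
from numpy.polynomial import legendre

# Decompose F(x) = 1 + 2x + 3x^2 exactly into Legendre polynomials.
power_coeffs = [1.0, 2.0, 3.0]               # a_0 + a_1 x + a_2 x^2
leg_coeffs = legendre.poly2leg(power_coeffs)
print(leg_coeffs)                             # coefficients of P_0, P_1, P_2

# Round-trip back to the power basis: the decomposition is exact
assert np.allclose(legendre.leg2poly(leg_coeffs), power_coeffs)
```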
Rate of convergence of an iterative root finding method similar to Newton-Raphson
What a strange method. Unlike Newton-Raphson, it uses the same slope at every step: the slope of the secant through $(a,f(a))$ and $(b,f(b))$. Sure, convergence is not guaranteed. Take, for example, $\sqrt{x}$ on $[a,b]=[0,4]$. The slope $m=(f(b)-f(a))/(b-a)$ is $1/2$, so the algorithm proceeds by moving from the initial point along the line with this slope until it intersects $y=0$; then repeat. This is problematic because for any starting point other than $4$, the intersection happens in negative territory. Even if the function were defined there, who knows what it would do. To estimate the speed of convergence, suppose $x_n\to x^*$ where $f(x^*)=0$. Generically, $f'(x^*)$ will not be equal to $m$. Therefore, the prediction $x_{n+1}$ will miss $x^*$ by an amount comparable to $|x_n-x^*|$; this is a linear convergence rate.
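A demonstration of this fixed-slope (chord) iteration on an example of my choosing, showing the linear rate: the successive error ratios settle near $|1 - f'(x^*)/m|$.

```python
import math

# Fixed-slope iteration x_{k+1} = x_k - f(x_k)/m on f(x) = x^2 - 2,
# where m is the secant slope over [a, b] = [1, 2]:
# m = (f(2) - f(1)) / (2 - 1) = 3.
f = lambda x: x * x - 2
m = (f(2.0) - f(1.0)) / (2.0 - 1.0)

x, root = 1.0, math.sqrt(2.0)
errors = []
for _ in range(12):
    x = x - f(x) / m
    errors.append(abs(x - root))

# Linear convergence: error ratios approach |1 - f'(x*)/m|
ratios = [errors[k + 1] / errors[k] for k in range(4, 8)]
print(ratios, abs(1 - 2 * root / m))
```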
Implicit function theorem and derivative
Given that for all $x\in I$ we have $F(x,\varphi(x))=0$, we differentiate with respect to $x$: $$0 = \frac{d}{dx}F(x,\varphi(x))=F_{x}(x,\varphi(x)) + F_{y}(x,\varphi(x))\varphi'(x).$$ Solving for $\varphi'(x)$ gives us the desired result: $$\varphi'(x) = \frac{-F_{x}(x,\varphi(x))}{F_{y}(x,\varphi(x))}.$$
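A numeric check of the formula on a concrete example of my choosing, $F(x,y) = x^2 + y^2 - 1$ with $\varphi(x) = \sqrt{1-x^2}$, comparing $-F_x/F_y$ against a central-difference estimate of $\varphi'$:

```python
import math

# F(x, y) = x^2 + y^2 - 1 defines phi(x) = sqrt(1 - x^2) implicitly
# (upper branch); check phi'(x) = -F_x / F_y at one point.
F_x = lambda x, y: 2 * x
F_y = lambda x, y: 2 * y
phi = lambda x: math.sqrt(1 - x * x)

x = 0.3
formula = -F_x(x, phi(x)) / F_y(x, phi(x))
h = 1e-6
numeric = (phi(x + h) - phi(x - h)) / (2 * h)   # central difference
print(formula, numeric)
```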
Show that for every m and n value, $\int_0^1 x^m (1-x)^n \,dx= \int_0^1 x^n (1-x)^m \,dx$
$$\int_0^1 x^m (1-x)^n \,dx= \int_0^1 x^n (1-x)^m \,dx$$ Substitute $t = 1-x$, so $x = 1-t$ and $dx = -dt$; the limits $x=0,1$ become $t=1,0$: $$\int_0^1 x^m (1-x)^n \,dx = \int_1^0 (1-t)^m t^n \,(-dt) = \int_0^1 t^n (1-t)^m \,dt.$$ Finally, replace $t$ by $x$ to get the required answer.
Two orbits, yet only one unit representation contained?
Because $$ \begin{bmatrix}0&1&0\\1&0&0\\0&0&1\end{bmatrix} $$ is similar to $$ \begin{bmatrix}1&0&0\\0&-1&0\\0&0&1\end{bmatrix}. $$ This is because $$ \begin{bmatrix}1&-1\\1&1\end{bmatrix}^{-1} \begin{bmatrix}0&1\\1&0\end{bmatrix} \begin{bmatrix}1&-1\\1&1\end{bmatrix} = \begin{bmatrix}1&0\\0&-1\end{bmatrix}. $$ You see, you missed the other, non-obvious copy of the trivial representation; possibly the best way to see these kinds of things is via characters. $\rho_{s}$ has character $\chi_{s}$ with value $3$ on $1$ and $1$ on $g = (12)$. Take the scalar product with the trivial character $\tau$ to get that $\tau$ has multiplicity $$ \frac{1}{2} ( 1 \cdot 3 + 1 \cdot 1) = 2 $$ in $\rho_{s}$.
Find all the numbers and aware
There is one very nice thing we can use. It is not a coincidence that 0 and 1 were the only numbers that worked for both modulo 8 and 125. We can prove that quite easily, actually. Let $p^k$ be a prime power. Then: $$ n^2=n\pmod{p^k} \iff p^k\mid n^2-n = n(n-1) \iff p^k \mid n \lor p^k \mid n-1 $$ $$ \iff n\in\{0,1\} \pmod{p^k} $$ (Do you see why the second "if and only if" is valid?) With this, we can easily solve any problem asking $n^2=n \pmod m$. Just find the prime factorisation $m=p_1^{\alpha_1}\cdots p_k^{\alpha_k}$. Then $n=0$ or $n=1$ modulo each of the prime powers, giving $2^k$ solutions, which we can find with the Chinese Remainder Theorem.
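A brute-force check for the modulus $m = 1000 = 2^3\cdot 5^3$ (two prime powers, so $2^2 = 4$ solutions, which the Chinese Remainder Theorem assembles from $\{0,1\}$ modulo $8$ and modulo $125$):

```python
# All n with n^2 = n (mod 1000). By the argument above there are 2^2 = 4:
# 0, 1, and the two CRT lifts (376 = 0 mod 8, 1 mod 125; 625 = 1 mod 8, 0 mod 125).
solutions = [n for n in range(1000) if n * n % 1000 == n]
print(solutions)
```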
Use central limit theorem to find variance
It seems to me that $X_i$ has a geometric distribution, which has variance $\frac{1-p}{p^2}=2$. For independent $X_i$, the sum has variance $2\times 100$ and $\sum_i X_i/100$ has variance $\frac{200}{100^2}=0.02$.
From compatible Riemannian metric to Hermitian metric
Extending $g$ sesquilinearly you get another metric, say $\tilde{g}$, on $TM\otimes \mathbb{C}$ but they are related by $$\left. \tilde{g}\right|_{T^{1,0}M} = \frac{1}{2}h$$ See Lemma 1.2.17 in page 30 of Huybrechts' book
Limits - What to do with the numerator?
You want to approach $-1$ from the negative side, so consider $-1-d$ where $d$ approaches zero from the positive side. As $x \to -1^-$: let $d = -1 - x$, hence $x = -1 - d$ and $d \to 0^+$. Then $$\frac{x}{-x^2+2+x} = \frac{-1-d}{-(-1-d)^2+2+(-1-d)} = \frac{-1-d}{-3d-d^2} \to\ ?$$ [You should be able to figure it out now.]
Expected value and the standard simple regression model
\begin{align} \text{E} \left ( \frac{\sum_{i=1}^{n} x_i Y_i}{\sum_{j=1}^{n} x_j^2} \right ) &= \left ( \sum_{j=1}^{n} x_j^2 \right )^{-1} \sum_{i=1}^{n} x_i \text{E}(Y_i) \\ &= \left ( \sum_{j=1}^{n} x_j^2 \right )^{-1} \sum_{i=1}^{n} x_i (\beta_0 + \beta_1 x_i) \\ &= \beta_0 n\bar{x} \left ( \sum_{j=1}^{n} x_j^2 \right )^{-1} + \beta_1 . \end{align}
The formula for pitch circle diameter.
Consider a regular $n$-gon centered at the origin and with side length $2r$. Then the vertices are the centers of the $n$ small circles, and the distance $\rho$ from the center to a vertex is given by the relationship $\sin \frac{\pi}{n} = \frac{r}{\rho}$. Hence $$R = \rho + r = r\left( 1 + \csc \frac{\pi}{n} \right).$$
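A quick sanity check of $R = r(1 + \csc\frac{\pi}{n})$ with $n = 6$ unit circles: in the hexagonal arrangement the centers sit at distance exactly $2r$ from the middle, so $R = 3r$.

```python
import math

# R = r (1 + csc(pi/n)); for n = 6, sin(pi/6) = 1/2, so R = r (1 + 2) = 3r
R = lambda r, n: r * (1 + 1 / math.sin(math.pi / n))
print(R(1.0, 6))
```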
How do you prove the set $G = \{(x, f(x)) \mid x \in \mathbb R\}$ is closed?
Almost, but it is better to start the argument with a general convergent sequence $(x_n,y_n)\rightarrow (x,y)$ in $G$, and then write $y_n$ as $f(x_n)$ to show that $y_n$ converges to $f(x)$. Then conclude that $(x_n,y_n)$ converges to $(x,f(x))$, which is in $G$; therefore every convergent sequence in $G$ has its limit in $G$, so $G$ is closed.
Finding the area of intersection and area of union between two sets of overlapping rectangles
One possible approach is to decompose each set into a non-overlapping set of axis-aligned rectangles (henceforth boxes), where each edge is shared by at most two rectangles. Let's say we have a set of two partially overlapping boxes. You split them into non-overlapping sub-boxes (cells), using a grid where each cell is rectangular, but each column of cells has its own width, and each row of cells has its own height: Here, $$\begin{array}{lll} L_1 = x_{min,a} & C_1 = x_{min,b} & R_1 = x_{max,a} \\ L_2 = x_{min,a} & C_2 = x_{min,b} & R_2 = x_{max,b} \\ L_3 = x_{min,b} & C_3 = x_{max,a} & R_3 = x_{max,b} \\ \; & C_4 = x_{max,a} & \; \end{array}$$ $$\begin{array}{lll} T_1 = y_{min,a} & M_1 = y_{min,b} & B_1 = y_{max,a} \\ T_2 = y_{min,a} & M_2 = y_{min,b} & B_2 = y_{max,b} \\ T_3 = y_{min,b} & M_3 = y_{max,a} & B_3 = y_{max,b} \\ \; & M_4 = y_{max,a} & \; \end{array}$$ Each cell in the grid has a width, a height, and a color: unused, set 1, set 2, or intersection. Each vertical cell edge has three pieces of information: its $x$ coordinate, its height, and the name of the variable its $x$ coordinate depends on (so that edges defined by a specific variable can be found). Similarly, each horizontal cell edge has a $y$ coordinate, a width, and the name of the variable its $y$ coordinate depends on. With the above information, it is trivial to compute the areas. Implement a procedure that sums (width×height) of each cell based on color. Then, the union is the sum of set 1, set 2, and intersect areas; and of course the area of the intersection is just the intersect sum. More importantly, you can just as easily compute the gradient (the partial derivatives of the area function with respect to each original variable), split by color pairs: For each horizontal edge of width $w$, examine the color of the cell above, and the color of the cell below.
This edge affects the derivative of the union with respect to the variable $z$ related to the edge by $dU/dz$, and/or of the intersection by $dO/dz$:

unused, set1: $dU/dz = -w$
unused, set2: $dU/dz = -w$
unused, intersection: $dU/dz = -w, \; dO/dz = -w$
set1, intersection: $dO/dz = -w$
set2, intersection: $dO/dz = -w$
intersection, set1: $dO/dz = +w$
intersection, set2: $dO/dz = +w$
intersection, unused: $dU/dz = +w, \; dO/dz = +w$
set1, unused: $dU/dz = +w$
set2, unused: $dU/dz = +w$

Similarly for the vertical edges. Some background math to explain this might be in order. The area of the union $U(x_1, y_1, ..., x_N, y_N)$ is piecewise linear with respect to each coordinate. That is, when one single coordinate (edge of an axis-aligned rectangle) changes by a differential amount $dx_i$, the area of the union changes by the amount $(h_{R,i} - h_{L,i})dx_i$, where $h_{R,i}$ is the total length of outside edges defined by this variable on the right side, and $h_{L,i}$ the total length of outside edges defined by this variable on the left side. (This is because an increment of the variable on a right edge increments the area, but on a left edge decrements the area.) This is easiest to understand by looking at a single axis-aligned rectangle, defined by $x_L \le x_R$ and $y_B \le y_T$, where $(x_L , y_B) - (x_R , y_T)$ and $(x_L , y_T) - (x_R , y_B)$ are the two diagonals of the rectangle.
Then, $$A(x_L , y_B , x_R , y_T ) = ( x_R - x_L ) ( y_T - y_B )$$ and the partial derivatives (defining $\nabla A$) are $$\frac{d\,A}{d\,x_L} = - ( y_T - y_B )$$ $$\frac{d\,A}{d\,y_B} = - ( x_R - x_L )$$ $$\frac{d\,A}{d\,x_R} = ( y_T - y_B )$$ $$\frac{d\,A}{d\,y_T} = ( x_R - x_L )$$ When we decompose the set of overlapping boxes into non-overlapping boxes in a grid, the above applies to each box if and only if the other box sharing the same edge is of a different type: if it were of the same type, moving the single edge between the two boxes by an infinitesimal amount would not have any effect on their total area at all. This is not a particularly hard programming problem, because there are many different ways to implement the data structures needed to solve it. (Although it does mean that finding a near-optimal solution is hard, simply because there are so many different ways of implementing this, and only actual practical testing would show which ones are most effective.) I would suggest implementing it and testing it separately first, perhaps having the test program output an SVG image of the resulting decomposed non-overlapping set, with outside horizontal edge widths and vertical edge heights and the variable names their coordinates depend on written on the outside of the image, for area and gradient verification by hand. If this approach feels intractable, it should be noted that for $N$ rectangles ($4N$ independent variables), you could simply estimate the derivatives using the central difference $$\frac{d\,A(c_1, c_2, ..., c_{4N})}{d\,c_i} \approx \frac{A(..., c_i + \Delta_i, ...) - A(..., c_i - \Delta_i, ...)}{2\Delta_i}$$ (involving $8N$ area calculations, so linear complexity with respect to $N$), where $\Delta_i$ is a small perturbation in the coordinate $c_i$. In a very real sense, if $\Delta_i$ approximates the typical change in one iteration step (if done along variable $c_i$), this should give more realistic feedback!
You see, since the area functions are piecewise linear, and there are up to $2N-1$ pieces along each axis, the derivative, i.e. the infinitesimal or instantaneous change along that axis, may not be truly informative! For example, consider a case where the two sets have a long thin hole somewhere. Because the hole has a long edge, the derivatives with respect to the variables defining the long edges of the hole are large, and that makes those variables "more important" when looking at the derivatives only. In reality, the area of the hole may be minuscule compared to the totality, making those variables not at all important. If $\Delta_i$ for the variables is large enough to span the hole, the estimate obtained with it will better reflect the actual change in area if the hole were covered.
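The grid decomposition above can be sketched in miniature for the two-box case: cut the plane along every edge coordinate (coordinate compression), classify each resulting cell by which boxes cover it, and sum areas by color. Names and box format are mine; boxes are `(xmin, ymin, xmax, ymax)` tuples.

```python
def areas(box_a, box_b):
    """Union and intersection areas of two axis-aligned boxes, by cutting
    the plane into grid cells along every x and y edge coordinate, then
    classifying each cell by which boxes cover its center."""
    xs = sorted({box_a[0], box_a[2], box_b[0], box_b[2]})
    ys = sorted({box_a[1], box_a[3], box_b[1], box_b[3]})

    def inside(box, cx, cy):
        return box[0] <= cx <= box[2] and box[1] <= cy <= box[3]

    union = inter = 0.0
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            w, h = xs[i + 1] - xs[i], ys[j + 1] - ys[j]
            cx, cy = (xs[i] + xs[i + 1]) / 2, (ys[j] + ys[j + 1]) / 2
            in_a, in_b = inside(box_a, cx, cy), inside(box_b, cx, cy)
            if in_a or in_b:
                union += w * h
            if in_a and in_b:
                inter += w * h
    return union, inter

# Two 2x2 boxes overlapping in a 1x1 square: union 7, intersection 1
print(areas((0, 0, 2, 2), (1, 1, 3, 3)))
```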
Inverse and composite functions.
They mean that it becomes a function from $X$ to $X$ (for the second identity function), or from $Y$ to $Y$ (for the first identity function). For example, for the second one: $(f^{-1} \circ f) (x) = x$ for any $x \in X$, hence it maps any $x \in X$ to $x$ itself. Hence it is the identity function from $X$ to $X$, which they say is a function 'on' $X$ itself: any function from $X$ to $X$ itself is a function 'on' $X$, whether it is an identity function or not.
What is the breakpoint of a piecewise function?
"Break points" are where the graph "breaks"- where the graph is no longer continuous or is not "smooth". Basically that means where the "formula" for the function changes so that you need a "piecewise" definition.
Calculate convergence of random variable sum
Your second part will be zero: $u_n=\frac{1}{n}$ converges to zero, so by Cesàro the average $U_n=\frac{\sum_{k=1}^{n}u_k}{n}$ converges to zero as well. Since $U_n$ is greater than your second term, that term converges to zero too.
Number of directed graphs without isolated vertices
For the case of $n$ labeled nodes we find using PIE the closed form $$D_n= \sum_{p=0}^n {n\choose p} (-1)^p 4^{n-p\choose 2}.$$ This will produce $$0, 3, 54, 3861, 1028700, 1067510583, 4390552197234, \ldots$$ which points to OEIS A054545 where it appears we have a match. For the unlabeled case observe that the number $F_n$ of non-isomorphic digraphs was computed at the following MSE link. We then obtain for the number of digraphs with no isolated nodes $$D_n = F_n - F_{n-1}$$ which will produce $$0, 2, 13, 202, 9390, 1531336, 880492496, 1792477159408,\ldots$$ again a match, this time with OEIS A053598. More data concerning PIE. The nodes of the poset for use with PIE correspond to all subsets $P$ of the $n$ vertices and represent labeled digraphs where the vertices in $P$, plus possibly some other vertices, are isolated. The weight on the digraphs specified by $P$ is $(-1)^{|P|}.$ This means the digraphs with no isolated vertices have weight one because they appear only in the node that corresponds to $P=\emptyset.$ Digraphs with a set $Q$ of isolated vertices where $|Q|\ge 1$ appear in all nodes $P\subseteq Q,$ for a total weight of $$\sum_{P\subseteq Q} (-1)^{|P|} = \sum_{p=0}^{|Q|} {|Q|\choose p} (-1)^p = 0,$$ i.e. zero. The cardinality of the set of digraphs corresponding to node $P$ is $4^{n-|P|\choose 2}.$ We now sum the weights over all digraphs, collecting the contributions from all nodes where they appear. We already know that this will assign weight one to those with no isolated vertices and zero otherwise, providing the desired count. There are ${n\choose p}$ sets $P$ of $p=|P|$ nodes and there are $4^{n-p\choose 2}$ digraphs at these nodes with weight $(-1)^p$, concluding the derivation of the formula using PIE. Note that the count is the same regardless of whether we iterate over all digraphs, collecting the weights from the nodes, or over all nodes, collecting the weights of all digraphs.
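The closed form is easy to evaluate and compare against the listed values:

```python
from math import comb

def D(n):
    """Labeled digraphs on n nodes with no isolated vertex, via PIE."""
    return sum((-1) ** p * comb(n, p) * 4 ** comb(n - p, 2)
               for p in range(n + 1))

print([D(n) for n in range(1, 8)])
```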
Find the smallest possible checksum
Let's start with a brief explanation of "casting out nines" as it's needed for the problem at hand. If $ABCD$ is the (at most) four-digit sum of a pair of three-digit numbers, $abc+def$, then, since $1000\equiv100\equiv10\equiv1$ mod $9$, we have $$\begin{align} A+B+C+D &\equiv(1000A+100B+10C+D)\\ &=(100a+10b+c)+(100d+10e+f)\\ &\equiv a+b+c+d+e+f\mod 9 \end{align}$$ so, if $\{a,b,c,d,e,f\}=\{2,3,4,5,7,9\}$, we have $$A+B+C+D\equiv2+3+4+5+7+9=(2+7)+3+(4+5)+9\equiv3\mod 9$$ which means that the digit sum for any pair $abc+def$ belongs to $\{3,12,21,\ldots\}$. The OP has already found one example with digit sum $3$, namely $243+957=1200$, so we can conclude that the smallest possible digit sum is $3$. The remaining question: how many different pairs give digit sum $A+B+C+D=3$? Note that $$abc+def=dbc+aef=aec+dbf=dec+abf$$ so it suffices to count the number of solutions with $a\lt d$, $b\lt e$ and $c\lt f$ and then multiply by $4$ (or by $8$, if you want to distinguish $(abc,def)$ from $(def,abc)$). Now the only realistic possibilities for $ABCD$ are $1200$, $1020$, $1002$, $1110$, $1101$, and $1011$. (That is, $500\lt abc+def\lt2000$, so we must have $A=1$ since $A=0$ would imply $B\ge5$.) Let's consider these possibilities according to what's in the ones place. If $D=0$ (i.e., if $ABCD=1200$, $1020$, or $1110$), we can only have $c=3$ and $f=7$, which will carry a $1$ into the tens place. That makes $C=1$ impossible, but it gives $1+2+9=12$ and $1+4+5=10$ as possibilities for $C=2$ and $C=0$. Indeed, we get exactly two solutions (with $a\lt d$, etc.), namely $423+597=1020$ and $243+957=1200$. If $D=1$ (i.e., if $ABCD=1101$ or $1011$), we can have $c+f=2+9=11$ or $c+f=4+7=11$. In either case the carried $1$ from $c+f$ means we need $b+e=9$ or $10$ in order to get $C=0$ or $1$. For $c+f=2+9$, both values of $C$ are attainable, each in only one way: $342+759=1101$ and $432+579=1011$.
For $c+f=4+7$, neither value of $C$ is attainable, since the only digits that sum to $9$ are $2+7$ and $4+5$ and, as already remarked, the only pair of digits that sums to $10$ is $3+7$. Thus we get just two solutions with $D=1$, namely $342+759=1101$ and $432+579=1011$. Finally, if $D=2$ (i.e., $ABCD=1002$), we can only have $c+f=3+9$ or $c+f=5+7$. In either case we'll need $b+e=9$, which is possible only as $2+7$ or $4+5$. If $c+f=3+9$, either of these is possible, while neither is possible if $c+f=5+7$. So again we get just two solutions, namely $243+759=1002$ and $423+579=1002$. Altogether we get six solutions with $a\lt d$, $b\lt e$ and $c\lt f$: $$\begin{align} 423+597&=1020\\ 243+957&=1200\\ 342+759&=1101\\ 432+579&=1011\\ 243+759&=1002\\ 423+579&=1002 \end{align}$$ The total count, without the restrictions $a\lt d$ and $b\lt e$, is thus $24$; removing the restriction $c\lt f$ brings the total number of solutions to $48$.
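A brute-force confirmation of both conclusions (smallest digit sum $3$, and $48$ ordered pairs achieving it), iterating over all arrangements of the six digits:

```python
from itertools import permutations

digits = (2, 3, 4, 5, 7, 9)
counts = {}   # digit sum of abc + def  ->  number of ordered pairs
for a, b, c, d, e, f in permutations(digits):
    s = (100 * a + 10 * b + c) + (100 * d + 10 * e + f)
    ds = sum(int(ch) for ch in str(s))
    counts[ds] = counts.get(ds, 0) + 1

print(min(counts))           # smallest attainable digit sum
print(counts[min(counts)])   # ordered pairs achieving it
```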
Specific proof that any finitely generated $R$-module over a Noetherian ring is Noetherian.
I assume that $S$ is a submodule of $M$ and you want to prove that $S$ is finitely generated. $T=S\cap M'$ is finitely generated as a submodule of a noetherian module ($M'$ is noetherian by the induction hypothesis.) $S/T=S/S\cap M'\simeq (S+M')/M'$ and the last one is a submodule of $M/M'$. But $M/M'$ is cyclic (generated by $\bar v_{n+1}$), hence noetherian, so $S/T$ is finitely generated. That's all.
Expectation of `fractional' Geometric brownian motions
We can actually compute $\mathbb{E}[W_t^a]$ without needing to go through Ito's formula. First, note that since $S_t^a \stackrel{(d)}{=} S_t^b$ we have $$1-W_t^a = 1-\frac{S_t^a}{S_t^a+S_t^b} = \frac{S_t^b}{S_t^a+S_t^b} \stackrel{(d)}{=} W_t^a.$$ Now we can compute $$ 1 = \mathbb{E}[W_t^a + (1-W_t^a)] = \mathbb{E}[W_t^a] + \mathbb{E}[(1-W_t^a)] = 2 \mathbb{E}[W_t^a]$$ so $\mathbb{E}[W_t^a] = \frac 12$.
Definite integration - involving greatest integer function
It should not be zero. As you define it, $[y]$ is the greatest integer less or equal to the number. This is better known as the integer part, [2.01] = 2, [3.94] = 3, etc. With that in mind, for the domain you selected and keeping in mind that $\sin(x) \in [-1,1]$, for $x\in (\pi /2, \pi]$ you will have $[\sin (x)] = 0$, and then for $x\in (\pi, 3\pi /2]$, $[\sin (x)] = -1$, so then your integral would be $$ \int_{\pi/2}^{3\pi /2}[\sin (x)]\,dx = 0\int_{\pi/2}^{\pi}\,dx - \int_{\pi}^{3\pi /2}\,dx = -\pi/2 $$
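A numeric check of this value (illustrative only): since $[\sin x]$ is a step function, a midpoint Riemann sum that avoids the jump points approximates the integral well.

```python
import math

# Midpoint rule for the integral of floor(sin x) over [pi/2, 3*pi/2];
# the integrand is 0 on (pi/2, pi] and -1 on (pi, 3*pi/2), so the
# integral should come out close to -pi/2.
N = 200000
a, b = math.pi / 2, 3 * math.pi / 2
h = (b - a) / N
approx = sum(math.floor(math.sin(a + (k + 0.5) * h)) for k in range(N)) * h
```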
Using the binomial theorem to prove that $z^n$ is continuous in $\mathbb{C}$
First, start with your definition of continuity. I prefer the definition that $f(z)$ is continuous if at each point $z_{0}$ in $\mathbb{C}$, we have that $\lim_{z \to z_{0}} f(z) = f(z_{0})$. In other words, if we choose a particular $z_{0}$, and let $z = h + z_{0}$, then we want to express $f(z)$ as $$f(z_{0} + h) = f(z_{0}) + g(h)$$ for some function $g(h)$, where $\lim_{h \to 0} g(h)= 0$. Note that $g$ depends on the choice of $z_{0}$. The point here is that the binomial theorem allows us to find a convenient expression for $g(h)$. This should help with proving that the limit is zero.
Equation of the plane tangent to the given surface
In order to view the equation as defining a surface, take $ x(y, z) $ to be $ x $ as a function of $y, z $. Then your equation becomes, $$ x(y, z)yz + x(y, z)^2 - 3y^3 + z^3 = 14 $$ Without solving for $ x $, we find the slope of $ x(y, z) $ in the y and z directions by implicit differentiation, $$ \partial_y x(y,z) y z + xz + 2 x \partial_y x - 9y^2 = 0 \\ \partial_z x(y,z) y z + xy + 2 x \partial_z x + 3z^2 = 0 $$ Solving for the derivatives at the given point, $ P=(5, -2, 3) $ $$ \partial_y x(y, z) = \frac{21}{4} \\ \partial_z x(y, z) = -\frac{17}{4} $$ We can then construct a tangent plane at $P$, $$ 4(x - 5) = 21(y + 2) - 17(z - 3) $$ by the standard formula. Notice that the derivatives match and the tangent touches at the given point.
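A finite-difference sanity check of these implicit partial derivatives (assuming the surface is the level set of $F(x,y,z)=xyz+x^2-3y^3+z^3$; the constant on the right-hand side does not affect the partials). By the implicit function theorem, $\partial x/\partial y=-F_y/F_x$ and $\partial x/\partial z=-F_z/F_x$:

```python
def F(x, y, z):
    # left-hand side of the (assumed) surface equation
    return x * y * z + x * x - 3 * y ** 3 + z ** 3

x0, y0, z0 = 5.0, -2.0, 3.0
h = 1e-6

def partial(f, point, i):
    # central difference along coordinate i
    p_plus = list(point); p_minus = list(point)
    p_plus[i] += h; p_minus[i] -= h
    return (f(*p_plus) - f(*p_minus)) / (2 * h)

Fx = partial(F, (x0, y0, z0), 0)
Fy = partial(F, (x0, y0, z0), 1)
Fz = partial(F, (x0, y0, z0), 2)

dx_dy = -Fy / Fx   # implicit-function-theorem value of dx/dy at P
dx_dz = -Fz / Fx   # note the sign this produces
```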
How do I write this multiplication of matrices down correctly: $(A \cdot B)^{2} = (A \cdot B) \cdot (A \cdot B)$?
One does not need to do any calculations beyond noting that by the definition of "squaring" square matrices (in particular $2\times 2$ matrices): $$ C^2:=C\cdot C\tag{1} $$ Now let $C=A\cdot B$. Then (1) implies that $$ (A\cdot B)^2=(A\cdot B)\cdot(A\cdot B) $$ Remark. In linear algebra, one usually doesn't write the dot $\cdot$ in matrix multiplication $A\cdot B$ since the dot is reserved for the "dot product" of vectors.
$(n - 1)$-dimensional submanifold of the manifold $\mathbb R^n$
You can use the following result which is known: Let $f:M\longrightarrow N$ be a smooth map where $M$ is $(n + k)$-dimensional and $N$ is $n$-dimensional. If $q = f(p)$ is a regular value, then $f^{-1}(q)$ is a $k$-dimensional smooth submanifold. In particular, consider $f:\mathbb{R}^{n}\longrightarrow \mathbb{R}$ given by $f(x)=x^{T}Ax$.
What's the probability distribution of the minimum of dependent variables?
$\min_j Y_j >1$ iff $|X-c_j| >1$ for each $j \in \{1,2,...,n\}$. This is true iff $X> \max_j c_j +1$ or $X < \min_j c_j -1$. So the answer is $$\Phi\left(\frac{\min_j c_j -1-\mu}{\sigma}\right)+1-\Phi\left(\frac{\max_j c_j +1-\mu}{\sigma}\right).$$
Eigenfunctions of a second derivative operator
If $f$ satisfies $$ -f''+a^2f-K(y)f=\lambda f, \quad f(-1)=f(1)=0, $$ then so do $$ f_{even}=\frac{1}{2}\big(f(x)+f(-x)\big), \quad f_{odd}=\frac{1}{2}\big(f(x)-f(-x)\big) $$ (assuming the coefficient $K$ is even). Hence, each eigenspace of our operator is spanned by odd and even eigenfunctions. We shall next show that only one of the two does not vanish identically. Clearly, if both did not vanish identically, then they would be linearly independent - otherwise, $$ f_{even}=cf_{odd}, \quad\text{and thus}\quad f_{even}(-x)=cf_{odd}(-x)=-cf_{odd}(x)=-f_{even}(x), $$ so the even function $f_{even}$ would also be odd, hence identically zero. But in such case, they would constitute a fundamental set of solutions (i.e., a basis of the set of solutions of $Ly=0$), as the set of solutions of $Ly=0$ is 2-dimensional. This is impossible, since they both vanish at $x=-1$, and hence the Wronskian is equal to zero.
A simple inequality in probability
Let $a_k:=P(X=k)=P(Y=k)$. We have \begin{align*} P(|X-Y|=x)&= P(X-Y=x)+P(Y-X=x)\\ &=\sum_{k\geq 0}P(Y=k)P(X=x+k)+\sum_{k\geq 0}P(X=k)P(Y=x+k)\\ &=2\sum_{k\geq 0}a_ka_{x+k}\\ &\leq 2\left(\sum_{k\geq 0}a_k^2\right)^{1/2}\left(\sum_{k\geq 0}a_{x+k}^2\right)^{1/2}\\ &\leq 2\sum_{k\geq 0}a_k^2\\ &=2P(X=Y). \end{align*}
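A small numeric illustration of the inequality $P(|X-Y|=x)\le 2P(X=Y)$ (not part of the proof), using the geometric pmf $a_k=(1/2)^{k+1}$ on $\{0,1,2,\dots\}$, for which $P(X=Y)=\sum_k a_k^2=1/3$:

```python
# pmf a_k = (1/2)^(k+1); truncate the series, the tail is negligible
K = 200
a = [0.5 ** (k + 1) for k in range(K)]

p_equal = sum(ak * ak for ak in a)        # P(X = Y), should be 1/3

def p_absdiff(x):                         # P(|X - Y| = x) for x >= 1
    return 2 * sum(a[k] * a[x + k] for k in range(K - x))

checks = [p_absdiff(x) <= 2 * p_equal for x in range(1, 6)]
```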
Triangulations, PL-triangulations and related conecpts
Part 4 is wrong: A PL manifold is a topological manifold together with a PL structure, which is an atlas with PL transition maps. No need for infinite subdivisions (but different transition maps require different subdivisions, of course). Part 5 is also somewhat oddly stated. You already explained that each smooth manifold admits a smooth triangulation, which is moreover combinatorial in the sense of part 7. Part 6 is correct but outdated: it has been known since the 1980s that there are topological 4-manifolds which are not triangulable. Last year Manolescu proved the same in all dimensions $\ge 5$. For part 7 you can just say that the categories of PL manifolds and combinatorial manifolds are isomorphic.
$\frac{2^x - 2^{-x}}{2^x + 2^{-x}} = \frac{1}{3}$
Actually for $$\dfrac ab=\dfrac cd$$ $\implies\dfrac ac=\dfrac bd=k$ (say), and $k$ may not be $1$. Use componendo et dividendo (https://www.qc.edu.hk/math/Junior%20Secondary/Componendo%20et%20Dividendo.htm): $$\dfrac{2^x}{2^{-x}}=\dfrac{3+1}{3-1}$$ $$2^{2x}=2^1\iff2^{2x-1}=1\iff x=\dfrac12$$ See also: Find all real numbers $x$ for which $\frac{8^x+27^x}{12^x+18^x}=\frac76$
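A one-line numeric confirmation that $x=\tfrac12$ solves the equation:

```python
# check that x = 1/2 satisfies (2^x - 2^-x)/(2^x + 2^-x) = 1/3
x = 0.5
lhs = (2 ** x - 2 ** (-x)) / (2 ** x + 2 ** (-x))
```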
prove that if $\exists a\in A\space:\space stb(a)\not=\{e\}$ then the action is unfaithful
Suppose for the moment that $G$ is abelian and acts transitively on $A$, and let $g\neq e$ be in the stabilizer of $a\in A$. For an arbitrary element $b\in A$, there exists $h\in G$ such that $b=h\cdot a$. Then $g\cdot b=g\cdot(h\cdot a)=(gh)\cdot a=(hg)\cdot a=h\cdot (g\cdot a)=h\cdot a=b$ (commutativity is used to swap $g$ and $h$), so $g$ would also stabilize $b$. Thus $g$ stabilizes everything, and the action isn't faithful. Now go back to the general situation and suppose that $G$ acts on $A$ such that $g\neq e$ stabilizes $a\in A$. Form the new set $C$ which is the disjoint union of $A$ and $G$, and let $G$ act on $C$ in the obvious way (by acting on elements according to $A$'s action if they are from $A$, and according to $G$'s action on itself if they are from $G$). Now, $g$ still stabilizes $a\in A$, but it can't stabilize $e\in G$, so the action on $C$ is faithful. This counterexample leads me to believe your question was intended to be for transitive actions only.
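Worth adding: for non-abelian $G$, transitivity alone does not force unfaithfulness. The natural action of $S_3$ on $\{0,1,2\}$ is transitive and faithful, yet every point has a nontrivial stabilizer; a quick check in Python (illustration only, not part of the original answer):

```python
from itertools import permutations

# G = S_3 acting naturally on A = {0, 1, 2}
G = list(permutations(range(3)))   # each g is a tuple with g[i] = image of i
act = lambda g, i: g[i]

# transitive: some g sends 0 to every b
transitive = all(any(act(g, 0) == b for g in G) for b in range(3))
# faithful: every non-identity g moves at least one point
faithful = all(any(act(g, i) != i for i in range(3))
               for g in G if g != (0, 1, 2))
# the stabilizer of the point 0 is nontrivial (it has two elements)
stab_0 = [g for g in G if act(g, 0) == 0]
```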
Finding Nash Equilibriums
To determine if a point is a Nash Equilibrium, we see if a player will deviate given that the other player holds fixed. So consider $(3, 0)$. Suppose player one picks the bottom row. Will player two change columns? Player two will be no better off by doing such, but will also be no worse off. So $(3, 0)$ is a candidate for a Nash Equilibrium. Now suppose player two is fixed at column one. Clearly, player one won't change rows, as that would make him worse off. So $(3, 0)$ is a Nash Equilibrium. The same analysis yields that $(7, 0)$ is also a Nash Equilibrium.
How to prove that Riemann integrable function is measurable?
Your argument for construction of $\phi_n$'s should also give you a decreasing sequence $\{\psi_n\}$ of simple functions such that $f \leq \psi_n$ and $\int \psi_n \to \int f $. From this you get $\sup_n \phi_n \leq f \leq \inf_n \psi_n$ and $\int [\inf_n \psi_n-\sup_n \phi_n ] =0$. This implies $ \inf_n \psi_n-\sup_n \phi_n =0$ almost everywhere, from which Lebesgue measurability of $f$ is immediate.
How to compute $\int_{\lvert\,z\,\rvert\,=\,4}\frac{1}{z\cos(z)}\,dz$?
The residue at $z=0$ is $1$, and the residues at $z=\frac{\pi}{2}$ and $z=-\frac{\pi}{2}$ are both $-\frac{2}{\pi}$. Therefore the integral is equal to $2\pi\,i(1-\frac{4}{\pi})=2i(\pi-4)$.
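A numeric cross-check (illustrative): parametrize the circle as $z(t)=4e^{it}$ and apply the trapezoidal rule, which converges very rapidly for smooth periodic integrands.

```python
import cmath
import math

# trapezoidal rule for the contour integral of 1/(z cos z) over |z| = 4
N = 4000
total = 0j
for k in range(N):
    t = 2 * math.pi * k / N
    z = 4 * cmath.exp(1j * t)
    dz = 1j * z * (2 * math.pi / N)   # z'(t) dt
    total += dz / (z * cmath.cos(z))

expected = 2j * (math.pi - 4)          # the residue-theorem value
```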
Clarification in Siegel, combinatorial game theory
The result you want is: If $A \triangleleft B \le C$, then $A \triangleleft C$ (i.e. Right has a winning move on $A-C$). This follows from: If $X \triangleleft 0$ and $Y \le 0$ then $X+Y \triangleleft 0$. (Take $X = A-B$ and $Y = B-C$). To prove this last statement, note that $X \triangleleft 0$ means that Right has a winning move on $X$. Hence for some $X^R$, we have $X^R \le 0$. Since $Y \le 0$ also, we have $X^R + Y \le 0$. Thus $X^R + Y$ is a winning move for Right in $X+Y$. Hence $X + Y \triangleleft 0$.
How many $g$ in a finite group are such that $b=g^{-1}ag$ for given $a\ne b$ in the group?
Consider the map $$ f(a, g)=g^{-1}ag $$ then clearly $$ f(f(a, g), h)=f(a, gh) $$ If there exists $g\in \mathcal F^{(a)}_b$ then $$\begin{align} h\in \mathcal F^{(a)}_b &\Leftrightarrow f(a, g)=f(a, h)\\ &\Leftrightarrow f(a, g)=f(f(a, hg^{-1}), g)\\ &\Leftrightarrow a=f(a, hg^{-1})\\ &\Leftrightarrow hg^{-1}\in C_G(a)\\ &\Leftrightarrow h\in C_G(a)g \end{align}$$ so if $\mathcal F^{(a)}_b$ isn't empty there exists $g\in G$ such that $\mathcal F^{(a)}_b=C_G(a)g$ then $$ \left\lvert \mathcal F^{(a)}_b\right\rvert\, =\begin{cases} 0 & \text{if }\mathcal F^{(a)}_b\text{ is empty}\\ \lvert C_G(a)\rvert & \text{otherwise} \end{cases} $$ Observe that $b\neq b'$ implies that $\mathcal F^{(a)}_b\cap \mathcal F^{(a)}_{b'}=\emptyset$, so all non-empty $\mathcal F^{(a)}_b$ determine a partition of $G$, and there are exactly $\lvert G : C_G(a)\rvert$ different $b\in G$ such that $\mathcal F^{(a)}_b$ is not empty.
representation of fractional number
Assuming you actually mean base 2 and that $0<x<1$ you define the following sequences $$x_0=x; \quad a_{n+1} = \lfloor 2x_n\rfloor; \quad x_{n+1} = \{2x_n\} \quad \text{for }n\ge0$$ Here $\{\dots\}$ and $\lfloor \dots \rfloor$ are the fractional and integer part functions, so that $x=\sum_{n\ge1} a_n 2^{-n}$.
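The recursion above translates directly into code; using exact rationals avoids floating-point drift (a minimal sketch):

```python
from fractions import Fraction

def binary_digits(x, n):
    """First n base-2 digits a_1, a_2, ... of x in (0, 1)."""
    digits = []
    for _ in range(n):
        x *= 2
        a = int(x)        # integer part of 2*x_n is the next digit
        digits.append(a)
        x -= a            # keep the fractional part as the new x_n
    return digits
```

For example, $1/3=0.010101\ldots_2$ and $5/8=0.1010_2$.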
conditions in permutation and combination when two object can be assumed as single element
You should split up into $4$ cases: (1) the selected $7$ distinct digits contain $5$ and do not contain $6$; (2) they contain $6$ and do not contain $5$; (3) they contain neither $5$ nor $6$; (4) they contain both $5$ and $6$. In the first 3 cases the condition that $5$ and $6$ do not appear consecutively is irrelevant. Solution: $\binom767!+\binom767!+\binom777!+\binom75(7!-6!2!)$
Evaluating this complicated integral using complex analysis
Consider that $\cos{\left [ (2 m+1) \theta \right ]} = \operatorname{Re}{\left [e^{i (2 m+1) \theta}\right ]} $. Then we consider the integral $$-i \oint_{|z|=1} dz \, \sin{\left ( z+\frac1{z} \right )} z^{2 m}$$ which is equal to $2 \pi i$ times the sum of the residues of the integrand at $z=0$. To evaluate the residues and their sum, we Taylor expand the sine term to get, as the integrand, $$-i \sum_{k=0}^{\infty} \frac{(-1)^k}{(2 k+1)!} \left ( z+\frac1{z} \right )^{2 k+1} z^{2 m} $$ The coefficient of $z^{-1}$ in the $k$th term of the expansion is the binomial coefficient $\binom{2k+1}{k+m+1}$, since we need the term of $\left ( z+\frac1{z} \right )^{2 k+1}$ carrying the power $z^{-(2m+1)}$. (You can work this out for successive values of $m$.) Thus, the integral is equal to $$2 \pi \sum_{k=0}^{\infty} \frac{(-1)^k}{(2 k+1)!} \binom{2 k+1}{k+m+1} = 2 \pi \sum_{k=0}^{\infty} \frac{(-1)^k}{(k-m)! (k+m+1)!}$$ To evaluate the sum, note that the terms for $k \lt m$ are zero. Then we can shift the index of the sum to get $$ 2 \pi \sum_{k=m}^{\infty} \frac{(-1)^k}{(k-m)! (k+m+1)!} = 2 \pi (-1)^m \sum_{k=0}^{\infty} \frac{(-1)^k}{k! (k+2 m+1)!}$$ The latter sum is recognizable as a Bessel function evaluated at $z=2$. Thus, $$\int_{-\pi}^{\pi} d\theta \, \sin{(2 \cos{\theta})} \cos{\left [ (2 m+1) \theta \right ]} = 2 \pi (-1)^m J_{2 m+1}(2)$$
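An illustrative numeric cross-check (not part of the derivation): approximate the integral by a Riemann sum and the Bessel function by its power series at $z=2$.

```python
import math

def J(nu, x, terms=40):
    # Bessel function of the first kind, via its power series
    return sum((-1) ** k * (x / 2) ** (2 * k + nu)
               / (math.factorial(k) * math.factorial(k + nu))
               for k in range(terms))

def integral(m, N=5000):
    # midpoint Riemann sum of sin(2 cos t) cos((2m+1) t) over (-pi, pi)
    h = 2 * math.pi / N
    total = 0.0
    for k in range(N):
        t = -math.pi + (k + 0.5) * h
        total += math.sin(2 * math.cos(t)) * math.cos((2 * m + 1) * t)
    return total * h

m = 1
lhs = integral(m)
rhs = 2 * math.pi * (-1) ** m * J(2 * m + 1, 2)
```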
Linear Transformation Matrix with Change of Basis
Assuming that $b_1 = \begin{bmatrix}1 \\-2\end{bmatrix}$ and $b_2 = \begin{bmatrix}-3 \\-5\end{bmatrix}$ are in terms of the standard basis, the matrix that changes your basis to the standard one is \begin{equation} P = \left(\begin{matrix} 1 & -3\\ -2 & -5 \end{matrix}\right). \end{equation} This implies that the matrix of $T$ expressed in the standard basis must be the following product: \begin{equation} P\left(\begin{matrix}4 & 6 \\5 & 3 \end{matrix}\right) P^{-1} = \left(\begin{matrix}1 & -3 \\-2 & -5 \end{matrix}\right) \left(\begin{matrix}4 & 6 \\5 & 3 \end{matrix}\right) \left(\begin{matrix}5/11 & -3/11 \\-2/11 & -1/11 \end{matrix}\right) \end{equation} This is because the input vector must first be changed from the standard basis to the original one (using $P^{-1}$) and the output vector must be changed from the original basis back to the standard one (using $P$).
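A quick exact-arithmetic check of this product (illustrative only): the standard-basis matrix $S=PDP^{-1}$ should send $b_1$ to $4b_1+5b_2$, since the first column of the given matrix is $(4,5)^T$ in the $\{b_1,b_2\}$ basis.

```python
from fractions import Fraction as Fr

def matmul(A, B):
    # 2x2 matrix product over exact rationals
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[Fr(1), Fr(-3)], [Fr(-2), Fr(-5)]]
D = [[Fr(4), Fr(6)], [Fr(5), Fr(3)]]
Pinv = [[Fr(5, 11), Fr(-3, 11)], [Fr(-2, 11), Fr(-1, 11)]]

S = matmul(matmul(P, D), Pinv)   # matrix of T in the standard basis

I = matmul(P, Pinv)              # sanity check: should be the identity
b1 = (Fr(1), Fr(-2)); b2 = (Fr(-3), Fr(-5))
Sb1 = (S[0][0] * b1[0] + S[0][1] * b1[1],
       S[1][0] * b1[0] + S[1][1] * b1[1])
```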
What does it mean to *accelerate* the convergence of an iterative method?
Let $\{x_n\}_{n=1}^\infty$ be a sequence such that $$x_n \rightarrow \alpha, \quad n \rightarrow \infty, \quad n \in \mathbb{N},$$ and let $\{\tilde{x}_n\}_{n=1}^\infty$ be a sequence obtained from $\{x_n\}_{n=1}^\infty$. We say that $\{\tilde{x}_n\}$ converges faster than $\{ x_n\}$ to $\alpha$ provided $$ \frac{\tilde{x}_n- \alpha}{x_n - \alpha} \rightarrow 0, \quad n \rightarrow \infty, \quad n \in \mathbb{N}.$$ These notes are not a bad place to start reading about the acceleration of sequences.
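A classic concrete example of such an accelerated sequence is Aitken's $\Delta^2$ process, $\tilde{x}_n = x_n - (\Delta x_n)^2/\Delta^2 x_n$; applied here (for illustration) to the slowly convergent Leibniz partial sums for $\pi/4$:

```python
import math

def aitken(seq):
    """Aitken's delta-squared acceleration of a convergent sequence."""
    out = []
    for n in range(len(seq) - 2):
        d1 = seq[n + 1] - seq[n]
        d2 = seq[n + 2] - 2 * seq[n + 1] + seq[n]
        out.append(seq[n] - d1 * d1 / d2)
    return out

# partial sums of the Leibniz series 1 - 1/3 + 1/5 - ... -> pi/4
partial = []
s = 0.0
for k in range(12):
    s += (-1) ** k / (2 * k + 1)
    partial.append(s)

accelerated = aitken(partial)
err_plain = abs(partial[-1] - math.pi / 4)
err_acc = abs(accelerated[-1] - math.pi / 4)
```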
Initial values appear from nothing
Fibonacci is defined as $$F_1 = 1, F_2 = 1\\F_{n+1} = F_n + F_{n-1}$$ and never hits zero since it is obviously increasing. If you would take $$G_1=0,G_2=0\\G_{n+1} = G_n + G_{n-1},$$ then of course the sequence will be constantly zero. EDIT: According to your link, a sequence is causal if $x_n=0$ for $n<0$, however, this means that the definition of causal sequences only applies to sequences where $x_n$ is defined both for positive and negative values of $n$. In the case of the Fibonacci sequence, however, $F_n$ is defined only for $n\geq 0$, so it is, by definition, not causal.
Determine the image of the set $G = \{ z \mid |z|<1,\ \operatorname{Im} z>0\}$ under $f(z) = \frac{z+\frac{1}{z}}{2}$
Let $\Gamma$ be a complex contour for the boundary of the set $G$ with a notch of radius $h$ around (and above) the pole of $f(z)$ at $0+0i$. We will let $h \to 0^+$ to get a better intuitive idea of the image of $G$ under $f$. Define the four points $A(-1,0),B(1,0),C(h,0),D(-h,0)$ that specify $\Gamma$. It is clear that the points map as follows: $$\begin{align} A&\mapsto(-1,0) \\ B&\mapsto(1,0) \\ C&\mapsto(\tfrac{1}{2}(h+\tfrac{1}{h}),0) \\ D&\mapsto(-\tfrac{1}{2}(h+\tfrac{1}{h}),0) \end{align}$$ For $z\in AB,|z|=1$ so $$f(z)=\frac{1}{2}\left(z+\frac{1}{z}\right)=\frac{1}{2}\left(z+\frac{\overline{z}}{|z|^2}\right)=\Re[z]$$ $\begin{array}{lc} \text{Therefore }&AB\mapsto \{x+iy:-1\le x\le 1,y=0\} \\[2ex] \text{Similarly }&BC\mapsto \{x+iy:1\le x\le \tfrac{1}{2}(h+\tfrac{1}{h}),y=0\} \\[2ex] \text{finally } &DA\mapsto \{x+iy:-\tfrac{1}{2}(h+\tfrac{1}{h})\le x\le -1 ,y=0\} \end{array}$ Now $CD$ can be parametrised as $\{z=re^{i\theta}:r=h,0\le\theta\le\pi\}$. So for $z\in CD$, we have $$f(z)=\frac{1}{2}\left(z+\frac{1}{z}\right) \implies \frac{1}{2}\left(\frac{1}{|z|}-|z|\right)\le|f(z)|\le \frac{1}{2}\left(\frac{1}{|z|}+|z|\right) \qquad (\text{since }|z|<1)$$ and $$f(z)=\frac{1}{2}\left(z+\frac{\overline{z}}{|z|^2}\right) \implies \Im[f(z)]=\frac{1}{2}\,\Im[z]\left(1-\frac{1}{|z|^2}\right)<0\text{ for }|z|<1,\ \Im[z]>0$$ So the image of $CD$ lies between circular arcs of radius $\frac{1}{2}(\frac{1}{h}-h)$ and $\frac{1}{2}(\frac{1}{h}+h)$ in the lower half-plane. Now if we let $h\to0^+$, then $C'$ tends to $(+\infty,0)$ and $D'$ tends to $(-\infty,0)$, and $C'D'$ becomes an arc of radius $\infty$ clockwise from $C'$ to $D'$ in the lower half-plane. So by the open mapping theorem, $G$ is sent to the lower half-plane (note: the open interval of the real axis between $-1$ and $1$ is not included, because $|z|<1$ for $G$). That is $$f(G)=\{x+iy:x\in(-\infty,\infty),y\in(-\infty,0)\}$$
No solutions to functional equations
I'm not sure if "no solution" is the term you're looking for. If you get $f(-2) = \frac{-1}{3} = 0$ then this contradicts the very definition of what a function is. So, you can conclude that your function is not defined.
Find CDF of $Y=-X+2$
You are almost there. Since $P(Y<x)=1-F_X(-x+2)=\begin{cases}1-0=1& -x+2\le 0\\1-\frac 12(-x+2)&0< -x+2\le 2\\ 1-1=0&-x+2>2\end{cases}$ as you noticed, we have $$F_Y(x)=\begin{cases}0&x<0\\\frac 12 x&0\le x< 2\\ 1& x\ge 2\end{cases}.$$
Uniqueness of automorphisms $\theta$ and $\psi$ in polar decomposition of $\alpha=\psi\theta$
If you first think of $\theta$ and $\theta'$ as positive definite matrices, then $\theta^2$ (which is function composition in your setup) becomes matrix multiplication. The result I was trying to refer to is that since $\theta^2$ is positive definite, you can take a "square root" by diagonalizing it and replacing the eigenvalues with their square roots. Similarly for $(\theta')^2$. Note that this square root is unique, so they must be $\theta$ and $\theta'$ respectively. In short, $\theta^2 = (\theta')^2$ implies $\theta=\theta'$. The argument for operators is essentially the same. Here is a post that goes through the details.
Line integrals and reparametrization
Part B: Yes, this parametrization is correct. But also don't forget to state the range of the parameter $t$, i.e. $\ldots\le t\le\ldots$, so that the trajectory actually starts and stops where it's supposed to. Part C: Yes, as you said simply find the divergence of $G$, and then integrate along the path $\tau$. This means that you will substitute the parametric expressions for $x$, $y$, and $z$ into the expression for $\operatorname{div}G$. But also make sure that you set up the line integral with respect to the arclength $ds$ (as it's given), where $ds=\sqrt{\left(\frac{dx}{dt}\right)^2+\left(\frac{dy}{dt}\right)^2+\left(\frac{dz}{dt}\right)^2}\,dt$.
Finding the determinant of $2A+A^{-1}-I$ given the eigenvalues of $A$
Hint: If $A=PDP^{-1}$ then $S=P(2D+D^{-1}-I)P^{-1}$.
Question about elements in a cartesian closed category.
No. Toposes in which that statement is true are called well-pointed and they are relatively rare things. Cartesian closure is therefore certainly not enough to induce it. To connect what you said more closely to well-pointedness, note that your statement is essentially that $\mathcal{C}(1,-)$ is conservative. A conservative functor is faithful if it preserves equalizers and $\mathcal{C}$ has them. $\mathcal{C}(1,-)$, like all covariant hom-functors, preserves all limits including equalizers. Toposes, by definition, have equalizers. This condition implies that $F$ is a strong functor (which it always will be in $\mathbf{Set}$). In particular, it implies $F(A\times X)\cong A\times FX$ natural in $A$ and $X$. A strong functor is more or less one where the functorial action can be internalized meaning instead of just having a family of functions $\mathcal{C}(A,B)\to\mathcal{C}(FA,FB)$ we have a natural transformation $[A,B]\to[FA,FB]$ where $[A,B]$ is the internal hom of $\mathcal{C}$ which, with respect to the cartesian monoidal structure, is $B^A$. At any rate, functors don't need to be strong and it should already have been a warning sign that this isomorphism required $F$ and $G$ to be endofunctors.
quotient map surjective and linear
This is an illustration of how using symbols instead of words leads to misunderstanding. By asking you to show $Q\in L(X,Y)$ with $\|Q\|=1$ the author meant that you should check that $Q$ is bounded (and, specifically, has norm $1$). The linearity was implicitly understood from the beginning: a quotient map is, by definition, a linear map between normed spaces satisfying the stated condition. The proof you gave is correct.
Unique representation of reals by (infinite) application of the (+,-,/,*,^) operations to elements of $\mathbb Q$?
You can use the continued fraction representation. You can use the decimal expansion, if you avoid infinitely repeating $9$'s. You can use a harmonic sum: for $x \gt 0$ add up all the terms in $\sum_n \frac 1n$ that keep it less than $x$, for negative $x$ subtract them. None of these are products, but the last two are sums, so you can represent $\ln x$ and then you can take the exponential.
About a property of Reynolds operator (Invariant theory)
First, 'constructing' this complement in such generality certainly cannot be done. The point is just that A can be broken up into a direct sum $A^G \oplus \bigoplus_{i} V_i$, where each $V_i$ is an isotypic $G$-submodule of $A$, i.e., a submodule such that any two simple submodules of $V_i$ are $G$-isomorphic. Moreover, this decomposition is unique up to relabeling, so in particular there is a unique $G$-module complement to $A^G$ in $A$, and it makes sense to define this projection without reference to an auxiliary decomposition. So in this case 'canonical' means something more like 'uniquely determined by the data' than 'directly describable from the data'. For your second question, the main point is that it suffices to show that if $W$ is a simple $G$-submodule of $A$ not lying in $A^G$ and $a \in A^G$, then $aW$ is also a $G$-submodule intersecting $A^G$ trivially. So let $w \in W$, $a \in A^G$, and $g \in G$. We have \begin{align} g(aw) = g(a)g(w) = ag(w), \end{align} so $aW$ is a $G$-submodule and the map $W \to aW$, $w \mapsto aw$, is a surjective $G$-module homomorphism. Thus either $aW$ is trivial or it is isomorphic to $W$, and in either case it intersects $A^G$ trivially. So multiplication by elements of $A^G$ preserves the $G$-stable complement to $A^G$ in $A$ and this gives the desired property for the projection.
$0 \leq Y \leq M$ random variable, $p > 1$. Calculate $\mathbb{E}(Y^p)$
By definition $$ \mathbb{E}[Y^p] = \int_0^{M} y^p f_Y(y)dy\ , $$ where $f_Y(y)$ is the pdf of $Y$, and thus equal to $f_Y(y)=\frac{d}{dy}\mathbb{P}[Y<y]=-\frac{d}{dy}\mathbb{P}[Y>y]$. Hence $$ \mathbb{E}[Y^p] = -\int_0^{M} y^p \frac{d}{dy}\mathbb{P}[Y>y]dy\ , $$ and using integration by parts $$ \mathbb{E}[Y^p] = -\left[y^p \mathbb{P}[Y>y]\right]\Big|_0^M + \int_0^M dy\ p y^{p-1}\mathbb{P}[Y>y]=\boxed{\int_0^M dy\ p y^{p-1}\mathbb{P}[Y>y]} $$
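A numeric check of the boxed tail formula (illustrative): for $Y\sim\mathrm{Uniform}(0,M)$ we have $P(Y>y)=1-y/M$ and $\mathbb{E}[Y^p]=M^p/(p+1)$, so the integral $\int_0^M p\,y^{p-1}P(Y>y)\,dy$ should reproduce that value.

```python
# midpoint-rule evaluation of the tail formula for Y ~ Uniform(0, M)
M, p = 2.0, 3

def survival(y):            # P(Y > y) for the uniform distribution on [0, M]
    return 1.0 - y / M

N = 100000
h = M / N
tail_formula = sum(p * ((k + 0.5) * h) ** (p - 1) * survival((k + 0.5) * h)
                   for k in range(N)) * h

exact = M ** p / (p + 1)    # E[Y^3] = M^3 / 4 for the uniform case
```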
If $L_1 \cap L_2$ is decidable, prove/disprove that $L_1$ and/or $L_2$ are decidable
Let $U$ be an undecidable subset of the natural numbers. Let $L_1$ be the set of all $2x$, where $x$ ranges over $U$, and let $L_2$ be the set of all $2x+1$, where $x$ ranges over $U$. Then $L_1\cap L_2$ is empty, hence trivially decidable, but $L_1$ and $L_2$ are not. Remark: In more "language" language, let $L_0$ be an undecidable language over the alphabet $\{x\}$. Let $L$ be the same language, considered over the alphabet $\{x,a,b\}$. Let $L_1$ be obtained from $L$ by replacing every occurrence of the letter $x$ by the letter $a$, and let $L_2$ be obtained in the same way, using the letter $b$. Then $L_1\cap L_2$ is clearly decidable, while $L_1$ and $L_2$ are not.
How do I factor $z^4+2z^3+4z^2+2z+3$?
Comparing $z^4+2z^3+4z^2+2z+3$ with $(z^2+az+1)(z^2+bz+3)$ gives you $a=0,b=2$, namely,$$z^4+2z^3+4z^2+2z+3=(z^2+1) (z^2+2 z+3).$$
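A quick check of the factorization by multiplying coefficient lists (illustrative only):

```python
def polymul(p, q):
    """Multiply polynomials given as coefficient lists (ascending powers)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (z^2 + 1) * (z^2 + 2z + 3), coefficients in ascending order of degree
product = polymul([1, 0, 1], [3, 2, 1])
# expected: 3 + 2z + 4z^2 + 2z^3 + z^4
```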
How would you write a number in base 100?
To write numbers in base $100$ you need $100$ different "digits", starting with $0$ and ending with whatever represents $99$. I would use the (base $10$) numbers $0, \ldots, 99$ for the digits, so, for example the number $12345$ (in base $10$) is $(1)(23)(45)$ in base $100$. You just group the ordinary digits in pairs, starting from the right. So $199$ would be $(1)(99)$.
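The pair-grouping rule is the same as repeated division by $100$; a short helper (illustrative):

```python
def to_base100(n):
    """Base-100 'digits' of a nonnegative integer, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % 100)   # least significant base-100 digit
        n //= 100
    return digits[::-1]
```

So $12345$ becomes $(1)(23)(45)$ and $199$ becomes $(1)(99)$.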
Dimension of $M_2(\Gamma_0(14))$
Note: In my answer I will simplify notation, writing $M_2(N)$ to mean $M_2(\Gamma_0(N))$. You are correct that $\dim(M_2(7))=\dim(M_2(2))=1$. These spaces contain the Eisenstein series of weight 2 as described in section 3.5 of these notes (http://www.few.vu.nl/~sdn249/modularforms16/Notes.pdf). In particular, $$E_2^{(7)}(z)=E_2(z)-7E_2(7z)\tag{1}$$ is a modular form in $M_2(7)$ and $$E_2^{(2)}(z)=E_2(z)-2E_2(2z)\tag{2}$$ is a modular form in $M_2(2)$. These are oldforms as you've stated and together span a 2-dimensional subspace of $M_2(14)^{\text{old}}$. However, the definition of oldforms also encompasses modular forms of the form $f(dz)$ where $f(z)$ is a modular form at a level $M$ strictly dividing $N$ and $d$ divides $N/M$. Hence, the forms $E_2^{(7)}(2z)$ and $E_2^{(2)}(7z)$ are also oldforms. Explicitly we have $$E_2^{(7)}(2z)=E_2(2z)-7E_2(14z)\tag{3}$$ and $$E_2^{(2)}(7z)=E_2(7z)-2E_2(14z)\tag{4}.$$ The forms (1), (2), (3) and (4) form a spanning set for $M_2(14)^{\text{old}}$. Although they are linearly dependent, any 3 of them can be used to form a basis for $M_2(14)^{\text{old}}$. This explains why $\dim(M_2(14)^{\text{old}})=3$.
An inequality for symmetric matrices: $ \mbox{trace}(XY)\leq \lambda(X)^T\lambda(Y) $.
The proof is essentially the same as in the case where $X$ and $Y$ are positive semi-definite. Let $d_1\le\ldots\le d_n$ and $s_1\le\ldots\le s_n$ be the eigenvalues of $X$ and $Y$ respectively and let $D=\operatorname{diag}(d_1,\ldots,d_n),\ S=\operatorname{diag}(s_1,\ldots,s_n)$. Then $\operatorname{trace}(XY)=\operatorname{trace}(DQ^TSQ)$ for some real orthogonal matrix $Q$. Note that the Hadamard product $Q\circ Q=(q_{ij}^2)$ is a doubly stochastic matrix. Therefore \begin{align} \max_{QQ^T=I} \operatorname{trace}(DQ^TSQ) &=\max_{QQ^T=I} \sum_{i,j}s_id_jq_{ij}^2\\ &\le\max_M \sum_{i,j}s_id_jm_{ij};\ M=(m_{ij}) \textrm{ is doubly stochastic}.\tag{1} \end{align} As $\sum_{i,j}s_id_jm_{ij}$ is a linear function in $M$ and the set of all doubly stochastic matrices is compact and convex, $\max\limits_M \sum_{i,j}s_id_jm_{ij}$ occurs at an extreme point of this convex set. By Birkhoff's theorem, the extreme points of doubly stochastic matrices are exactly those permutation matrices. Since permutation matrices are real orthogonal, it follows that equality holds in $(1)$ and $\max\limits_{QQ^T=I} \operatorname{trace}(DQ^TSQ)=\max\limits_\sigma\sum_i d_is_{\sigma(i)}$, where $\sigma$ runs over all permutations of the indices $1,2,\ldots,n$. However, by mathematical induction on $n$, one can prove that $\sum_i d_is_{\sigma(i)}\le\sum_i d_is_i$ (note that the base case here is $n=2$, not $n=1$). Hence the assertion follows.
Change of variables in $k$-algebras
Say $I \subseteq m:=(x_1-a_1, \dots, x_n-a_n)$ (we are using the Nullstellensatz here). Then define an isomorphism of $k$-algebras $\phi: k[x_1, \dots, x_n] \to k[x_1, \dots, x_n]$ by $x_i \mapsto x_i + a_i$. Clearly $\phi(m)=(x_1, \dots, x_n)$. Take $J:=\phi(I)$, and notice that one has a $k$-algebra isomorphism $$k[x_1, \dots, x_n]/I \to k[x_1, \dots, x_n]/J.$$
Under what conditions does a series of matrices converge?
This is a geometric series. Let $B=\gamma A$, and use the "telescoping" formula $$(I-B)(I+B+B^2+B^3+\cdots+B^m)=I-B^{m+1}.$$ Taking limits, if we assume that $\lim_{m\to\infty}B^m=0$, then $$(I-B)(I+B+B^2+B^3+\cdots)=I-0=I$$ thus $$I+B+B^2+B^3+\cdots=(I-B)^{-1}.$$ And if the sum converges at all, then the individual terms must approach $0$, so it's necessary that $\lim_{n\to\infty}B^n=0$. This condition $B^n\to0$ is equivalent to having the spectral radius $\rho(B)<1$.
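A small numeric illustration (made-up $2\times2$ matrix with $\|B\|_\infty<1$, so $\rho(B)<1$): the partial sums $I+B+\cdots+B^m$ should approach $(I-B)^{-1}$.

```python
def matmul2(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = [[0.5, 0.1], [0.2, 0.3]]          # row sums < 1, so spectral radius < 1
I = [[1.0, 0.0], [0.0, 1.0]]

# partial sums of the Neumann series I + B + B^2 + ...
S = [[0.0, 0.0], [0.0, 0.0]]
term = I
for _ in range(80):
    S = [[S[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    term = matmul2(term, B)

# exact inverse of I - B via the 2x2 adjugate formula, for comparison
M = [[0.5, -0.1], [-0.2, 0.7]]        # this is I - B
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
inv = [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]
```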
Suppose $\mu$ is not an eigenvalue of A. Show that the equation $x'= Ax + e^{\mu t}b$.
If $\phi(t) = ve^{\mu t}$ is a solution to $x' = Ax + e^{\mu t} b, \tag{1}$ then $\phi'(t) = \mu v e^{\mu t} = A\phi(t) + be^{\mu t} = e^{\mu t} A v + e^{\mu t} b, \tag{2}$ so $\mu v = Av + b, \tag{3}$ or $(\mu I - A)v = b. \tag{4}$ (4) is a necessary condition for $\phi(t) = ve^{\mu t}$ to solve (1). In the event that $\mu$ is not an eigenvalue of $A$, then $\mu I - A$ is nonsingular, hence for any $b$ there is a $v$ such that (4) holds. Then we can run the steps (1)-(4) in reverse order; multiplying (3) by $e^{\mu t}$ yields (2), showing $\phi(t) = ve^{\mu t}$ solves (1). The condition (4) is thus also sufficient when $\mu$ is not an eigenvalue of $A$. QED. Hope this helps. Cheerio, and as always, Fiat Lux!!!
How to solve for n in a summation?
Assume the year has $m$ months. We want to find minimal $n$ such that $$ \frac{m!}{(m-n)!\cdot m^n}\le\frac12$$ The factorial function is very unhandy if one wants to solve an expression like this. For small $m$, probably the easiest way is indeed to try $n=1,2,\ldots $ in sequence until one gets lucky. For much larger values of $m$ one could employ a nice approximation for the factorial that makes it much easier to handle: $m!\approx m^me^{-m}\sqrt{ 2\pi m}$ (and, yes, that expression is nicer to handle). With this, we want to solve $$ \frac{m^me^{-m}\sqrt{2\pi m}}{(m-n)^{m-n}e^{-(m-n)}\sqrt{2\pi (m-n)}\,m^n}\approx \frac 12$$ The left side simplifies to $$ \frac{1}{\left(1-\frac nm\right)^{m-n+\frac12}e^n}$$ We know that for $m\gg n$, $\left(1-\frac nm\right)^m\approx e^{-n}$ so that we want to solve $$ \left(1-\frac nm\right)^{n-\frac12}\approx \frac12$$ By ignoring all higher terms in an expansion of the left hand side this becomes $$ 1-(n-\tfrac12)\cdot \frac nm\approx \frac12$$ and ultimately the approximate solution $n\approx \sqrt{\frac m2}$. However, this result is quite far off (we just made too many approximations and simplifications above) as it suggests $n\approx 2.4$ instead of $n=5$ when $m=12$; it also suggests $n\approx 13.5$ instead of $n=23$ when $m=365$. (The main problem is that $m\not\gg n$, so that $\left(1-\frac nm\right)^m\not\approx e^{-n}$.)
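The "try $n=1,2,\ldots$ in sequence" approach is easy to code: the quantity $\frac{m!}{(m-n)!\,m^n}$ is exactly the probability that $n$ independent uniform choices out of $m$ are all distinct, and it can be updated by one multiplication per step (a minimal sketch):

```python
def smallest_n(m):
    """Smallest n with m! / ((m-n)! * m^n) <= 1/2."""
    prob_distinct = 1.0
    n = 0
    while prob_distinct > 0.5:
        n += 1
        # multiply in the factor (m - (n-1)) / m for the n-th choice
        prob_distinct *= (m - (n - 1)) / m
    return n
```

This reproduces the exact answers that the crude approximation missed: $n=5$ for $m=12$ and the classic $n=23$ for $m=365$.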
Unitary equivalence of compact operators
As stated, the theorem is false. What is missing is that the complete isometry should map $A$ to $B$ and be unital. For an easy example where the theorem fails as stated, take $$ A=\begin{bmatrix}0&1\\0&0\end{bmatrix},\ \ \ B=\begin{bmatrix} 1&1\\0&1\end{bmatrix} $$ (they generate the same unital operator space, but they are obviously not unitarily equivalent). Even in that case, the nontrivial implication ($\Leftarrow$) is definitely not straightforward. Here is a very rough sketch of an argument. A unital complete isometry between the operator spaces $\text{span}\,\{I,A\}$ and $\text{span}\,\{I,B\}$ can be extended uniquely to a ucp map $\phi$ between the operator systems $\text{span}\,\{I,A, A^*\}$ and $\text{span}\,\{I,B,B^*\}$ with ucp inverse (i.e. a complete order isomorphism). Since $C^*(I,A)=\mathbb{C}I+K(H)$ (because $A$ is irreducible), by Arveson's Boundary Theorem the identity map $C^*(I,A)\to C^*(I,A)$ is a boundary representation, and similarly for $B$. The fact that the identity is a boundary representation is enough to guarantee that $C^*(I,A)$ and $C^*(I,B)$ are the C$^*$-envelopes of $\text{span}\,\{I,A,A^*\}$ and $\text{span}\,\{I,B, B^*\}$ respectively. Arveson's Choquet theorem then guarantees that $\phi$ extends to a C$^*$-isomorphism between $C^*(I,A)$ and $C^*(I,B)$. As these are irreducible and contain compact operators, one can show that the isomorphism is implemented by unitary conjugation. There is a rather complete proof in this paper by Doug Farenick (I'm not sure if this preprint is as updated as the published version in LAA).
Constrained minimization: characterizing derivatives of optimum with respect to parameters
I think I have solved this, but it would be extremely useful to get some feedback to verify this solution. We can get rid of the constraint $a+b=y$ and write the problem as $$f(x,y)=\min_{a\in(y-x,y]\cap[0,x)}\ \{h(x,a)+g(x,y-a)\}$$ Restricting attention to interior solutions where $0<a<y$, the first-order condition is $$h_a(x,a^*(x,y))=g_b(x,y-a^*(x,y))$$ We can write $f_x$ as $$f_x(x,y)=h_x(x,a^*(x,y))+h_a(x,a^*(x,y))a^*_x(x,y)+g_x(x,y-a^*(x,y))-g_b(x,y-a^*(x,y))a^*_x(x,y)$$ Using the FOC, this simplifies to $$f_x(x,y)=h_x(x,a^*(x,y))+g_x(x,y-a^*(x,y))$$ This is a bit comforting, as it appears to be the standard envelope condition result. We can write $f_y$ as $$f_y(x,y)=h_a(x,a^*(x,y))a^*_y(x,y)+g_b(x,y-a^*(x,y))\left[1-a^*_y(x,y)\right]$$ Using the FOC again, we get $$f_y(x,y)=g_b(x,y-a^*(x,y))=h_a(x,a^*(x,y))$$ This verifies the answer I proposed in item (iii) of my question. For corner solutions, I believe the answers proposed in (i) and (ii) of my question are correct.
Question about uniform convergence on $\mathbb{Q}$ implying uniform convergence on $\mathbb{R}$
Recall Cauchy's criterion for uniform convergence: $$\{f_n\} \text{ uniformly convergent on } E \iff \forall\epsilon>0, \exists N,\forall n,m>N,\forall x\in E, |f_n(x)-f_{m}(x)|\le\epsilon.$$ Since $\mathbb{Q}$ is dense in $\mathbb{R}$ and $f_n-f_m$ is continuous, the bound $|f_n-f_m|\le\epsilon$ on the dense set extends to all of $\mathbb{R}$ by continuity: $$\forall x\in\mathbb{Q}, |f_n(x)-f_{m}(x)|\le\epsilon\implies \forall x\in\mathbb{R},|f_n(x)-f_{m}(x)|\le\epsilon.$$ Thus, for continuous functions $\{f_n\}$, uniform convergence on $\mathbb{Q}$ implies uniform convergence on $\mathbb{R}$.
Cut-Vertex and Cut-Edge in Connected Graphs
The question is a little vague. For cut-edges there is a precise answer: the number of components increases by exactly one. This is fairly easy to see: if $e$ is a cut-edge, one component of $G-e$ contains one endpoint of $e$ and another component contains the other endpoint. These are different components since $e$ is a cut-edge, and adding $e$ back merges them into one component. However, as you've already shown, you can't nail this down to an exact number for all cut-vertices, nor can it be determined just from the degree of the cut-vertex. I don't know of any general formula for how many components the deletion of a cut-vertex adds. I imagine the point of this question was for you to show that, unlike cut-edges, cut-vertices do not add a predetermined number of components.
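Here is a small sketch (my own toy graphs) illustrating both halves of the answer with a plain DFS component count:

```python
# Deleting a cut-edge raises the component count by exactly one;
# deleting a cut-vertex can raise it by more.
def components(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    seen, count = set(), 0
    for s in vertices:
        if s in seen:
            continue
        count += 1
        stack = [s]                      # DFS over the component of s
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(adj[u] - seen)
    return count

# Path 0-1-2-3: every edge is a cut-edge.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3)]
assert components(V, E) == 1
assert components(V, [e for e in E if e != (1, 2)]) == 2   # exactly +1

# Star with center 0: deleting the cut-vertex 0 leaves 3 components (+2).
V2 = [0, 1, 2, 3]
E2 = [(0, 1), (0, 2), (0, 3)]
assert components([v for v in V2 if v != 0],
                  [e for e in E2 if 0 not in e]) == 3
```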
Understanding the definitions of vector and scalar
Let's consider two frames $S$ and $S'$. Positions in $S'$ are related to $S$ by a rotation $$\vec r\,'=R\,\vec r.$$ Then for a function to be a scalar means that $$T'(\vec r\,')=T(\vec r)$$ or equivalently $$T'(\vec r)=T(R^{-1}\vec r).$$ These equations say that if I want to find some scalar in the $S'$ frame (like temperature) I can use the same field$^*$ as in the $S$ frame but I just have to plug in the transformed position. The field itself doesn't change. For a vector field this is no longer the case. To get the vector in the $S'$ frame I not only have to transform the position vector, but also the vector itself. Take a look at this diagram: From the perspective of $S'$ the vector rotated along with the position vector$^{**}$ so we have $$\vec A\,'(\vec r\,')=R\vec A(\vec r)$$ $^*$ A field is just a quantity that depends on position. If we consider objects that are not fields we just get $T'=T$ and $\vec A\,'=R\vec A$. $^{**}$ Confusingly enough this depends on whether we are looking at transformations of vectors $\vec A$ or vector components $A_i$. Some textbooks transform the basis vectors $\vec e_i$ such that the components $A_i$ change in the opposite way but the total vector $\vec A=\sum_i A_i\vec e_i$ remains constant. Suddenly we could have a $R^{-1}$ instead of $R$. Always make sure this makes sense for yourself.
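A small numeric sketch of these rules in 2D, with my own toy fields $T(\vec r)=|\vec r|^2$ and $\vec A(\vec r)=\vec r$:

```python
# 2D rotation R by angle theta; check the scalar rule T'(r) = T(R^{-1} r)
# and the vector rule A'(r') = R A(r) with r' = R r.
import math

theta = 0.7
c, s = math.cos(theta), math.sin(theta)
def R(v):    return (c * v[0] - s * v[1], s * v[0] + c * v[1])
def Rinv(v): return (c * v[0] + s * v[1], -s * v[0] + c * v[1])

# Scalar field T(r) = |r|^2: here T'(r) = T(R^{-1} r) equals T(r)
# because |r|^2 is rotation invariant -- the field itself doesn't change.
T = lambda v: v[0]**2 + v[1]**2
r = (1.2, -0.5)
assert abs(T(Rinv(r)) - T(r)) < 1e-12

# Vector field A(r) = r: the vector itself must also be rotated.
A = lambda v: v
r_prime = R(r)
A_prime = R(A(r))        # transformed field value, evaluated at r'
assert all(abs(a - b) < 1e-12 for a, b in zip(A_prime, r_prime))
```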
Disconnected Probabilities
Imagine that the shovels and axes have either red or blue handles. I.e., there are a total of four blue shovels, 4 red shovels, 2 blue axes and 2 red axes. Label each worker 1 through 12. There is a bijection between the permutations of the multiset $\{4\cdot s_r,~ 4\cdot s_b,~ 2\cdot x_r,~ 2\cdot x_b\}$ and each person's team/tool assignment. For example, $s_b, s_b, s_r, s_r, x_b, x_r, s_b, s_b, s_r, s_r, x_b, x_r$ corresponds to persons 1,2,7, and 8 getting shovels for the blue team, persons 3,4,9,10 getting shovels for the red team, etc. The number of different combinations is then $\dfrac{12!}{4!4!2!2!} = 207900$ If you are unable to distinguish between the red team and the blue team, then divide by two for symmetry. An equivalent way of doing this would be breaking it into steps. Pick who has axes for blue team, pick who has axes for red team, pick who has shovels for blue, pick who has shovels for red, giving you $\binom{12}{2}\binom{10}{2}\binom{8}{4}\binom{4}{4}$ for the same result.
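Both counts can be checked quickly:

```python
# The multiset-permutation count 12!/(4! 4! 2! 2!) and the step-by-step
# binomial product give the same number.
from math import factorial, comb

multiset_perms = factorial(12) // (factorial(4) * factorial(4)
                                   * factorial(2) * factorial(2))
stepwise = comb(12, 2) * comb(10, 2) * comb(8, 4) * comb(4, 4)
assert multiset_perms == stepwise == 207900
```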
What is the right notion of separator in a 2-category?
The question reduces to that of what a faithful 2-functor is: you want the Yoneda embedding, restricted to the separator, to remain faithful. The strictest possible notion, that a separator separates 1-morphisms in the usual sense and 2-morphisms in the analogous sense, is probably fine for some purposes. However, it's perverse: we've proposed that a 2-functor $F:K\to L$ is faithful when it induces faithful, injective-on-objects functors $K(a,b)\to L(Fa,Fb)$ for every $a,b$. But this is not invariant under equivalence. In fact, up to equivalence, a faithful injective-on-objects functor is no different than a faithful functor. So you can consider adding additional conditions on $K(a,b)\to L(Fa,Fb)$. Natural options are that it be conservative, or that it be injective on isomorphism classes of objects. But these are independent conditions, which are again both independent of being injective-on-objects! I think the lesson of this all is that "faithful", and thus "separator", are not words that generalize naturally to 2-category theory, in essence because there isn't a single obvious notion of monomorphism of categories.
Proving the Schwarz inequality.
Yes, the proof is correct! But the line $$ (x_1y_2-x_2y_1) = \sqrt{(x_1^2+x_2^2)\cdot(y_1^2+y_2^2) - (x_1y_1+x_2y_2)^2} $$ is redundant; you can shorten your proof by skipping it.
Cyclic representation on $L^2(\mu)$
As I already stated in the comment, the cyclic vectors are exactly those $f \in L^2$ for which $f(x) \neq 0$ holds for almost every $x \in X$. It is easy to see that if $M := \{x \mid f(x) = 0\}$ has positive measure, then by $\sigma$-finiteness, there is some set $M' \subset M$ of finite positive measure. But then $$ \overline{\{\phi \cdot f \mid \phi \in L^\infty\}} \subset \{g \in L^2 \mid g(x) = 0 \text{ for a.e. } x \in M\}, $$ so that $g = \chi_{M'} \in L^2 \setminus \overline{\{\phi \cdot f \mid \phi \in L^\infty\}}$, which shows that $f$ is not cyclic. Now assume that $f(x) \neq 0$ almost everywhere. By considering $$ \tilde{f} = |f| = \frac{\overline{f}}{|f|} \cdot f, $$ we can assume w.l.o.g. that $f \geq 0$ (note that $\overline{f}/|f| \in L^\infty$). Also, the linear span of all indicator functions $\chi_M$ with $M$ measurable, of finite measure is dense in $L^2$ (why?), so that it suffices to show that we can approximate each $\chi_M$ in the $L^2$ norm using functions of the form $\phi \cdot f$, $\phi \in L^\infty$. Let $K_n := \{x \mid f(x) \geq 1/n\}$. Note that $M = \bigcup_n (K_n \cap M)$ (up to a set of measure zero), where the union is increasing. By (e.g.) dominated convergence, this implies $$ \Vert \chi_M - \chi_{M \cap K_n} \Vert_2 = \sqrt{\mu(M \setminus K_n)} \to 0, $$ so that it suffices to approximate each $\chi_{M \cap K_n}$. But $\phi := \chi_{M \cap K_n} \cdot \frac{1}{f} \in L^\infty$ (why?) with $$ \phi \cdot f = \chi_{M \cap K_n}, $$ which completes the proof.
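A concrete numeric illustration of the approximation step (my own choice: $X=(0,1]$ with Lebesgue measure, $f(x)=x$, which is nonzero a.e., and $M=(0,1]$), where the error is exactly $\sqrt{\mu(M\setminus K_n)} = \sqrt{1/n}$:

```python
# K_n = {f >= 1/n} = [1/n, 1], phi_n = chi_{K_n} / f (bounded by n), and
# ||chi_M - phi_n * f||_2 = sqrt(mu(M \ K_n)) = sqrt(1/n) -> 0.
import math

def l2_error(n, grid=100000):
    # Riemann-sum approximation of the L^2 norm of chi_M - phi_n * f on (0, 1]
    total = 0.0
    for i in range(1, grid + 1):
        x = i / grid
        phi_f = x * (1 / x) if x >= 1 / n else 0.0   # phi_n(x) * f(x)
        total += (1.0 - phi_f) ** 2 / grid
    return math.sqrt(total)

for n in (4, 100, 10000):
    assert abs(l2_error(n) - math.sqrt(1 / n)) < 1e-2
```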
Find collision point between vector and fencing rectangle
Let $a = \cos(\theta)$ and $b = \sin(\theta)$, where $\theta$ is the angle from the $x$-axis to the direction of your ray. Now solve the four equations $$ x + a t_1 = width \\ y + b t_2 = height \\ x + a t_3 = 0 \\ y + b t_4 = 0 $$ by computing (assuming $a$ and $b$ are both nonzero): $$ t_1 = (width - x) / a \\ t_2 = (height - y) / b\\ t_3 = -x/a \\ t_4 = -y / b $$ Among the numbers $t_1, \ldots, t_4$, find the smallest nonnegative one, and call that $t$. So if $t_1 = 4, t_2 = -2, t_3 = 3, t_4 = 9$, then $t = 3$. If all four are negative (which can happen only if $(x, y)$ is outside the rectangle), then there's no intersection. Assuming there is at least one nonnegative $t_i$, the intersection point is at $(x + ta, y + tb)$.
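Here is a short sketch of this procedure (function name and structure are my own; it assumes the start point is inside the rectangle, as in the question):

```python
# First intersection of the ray from (x, y) in direction theta with the
# boundary of the rectangle [0, width] x [0, height].
import math

def ray_rect_hit(x, y, theta, width, height):
    a, b = math.cos(theta), math.sin(theta)
    ts = []
    if a != 0:
        ts += [(width - x) / a, -x / a]    # right and left walls
    if b != 0:
        ts += [(height - y) / b, -y / b]   # top and bottom walls
    ts = [t for t in ts if t >= 0]
    if not ts:
        return None                        # no intersection ahead of the ray
    t = min(ts)                            # smallest nonnegative parameter
    return (x + t * a, y + t * b)

# From the center of a 10 x 10 box, heading along +x: hit the right wall.
px, py = ray_rect_hit(5, 5, 0.0, 10, 10)
assert abs(px - 10) < 1e-9 and abs(py - 5) < 1e-9
```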
Does proper contraction on Hilbert space necessarily lead to convergence in norm to zero?
Let $\{a_n\}$ be a sequence of positive real numbers that increases to $1$, with the property that the sequence of products $$ a_1,\ a_1a_2,\ a_1a_2a_3,\ a_1a_2a_3a_4,\ \ldots$$ converges to a positive value. It's not hard to write down a specific example. Let $\cal H = \ell^2(\mathbf R)$ and define $T : \cal H \to \cal H$ by $$T(x_1,x_2,x_3,\ldots) = (0, a_1x_1,a_2x_2,a_3x_3,\ldots).$$ Clearly $T$ is linear and bounded, and $\|Th\| < \|h\|$ for all (nonzero) $h$. Let $h = (1,0,0,0,0,\ldots)$. What is $\|T^nh\|$ equal to?
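To make this concrete with a specific example of my own, take $a_n = e^{-1/2^n}$: the products $a_1 a_2 \cdots a_n = e^{-(1-2^{-n})}$ decrease to $e^{-1} > 0$, and a short computation shows $\|T^n h\| = a_1 a_2 \cdots a_n$, so $\|T^n h\| \to 1/e \neq 0$ even though $\|Th\| < \|h\|$ for every nonzero $h$:

```python
# With a_n = exp(-1/2**n), the infinite product is exp(-sum 2^{-n}) = exp(-1),
# so ||T^n h|| = a_1 * ... * a_n stays bounded away from zero.
import math

prod = 1.0
for n in range(1, 60):
    prod *= math.exp(-1 / 2**n)   # each a_n < 1, and a_n increases to 1

assert abs(prod - math.exp(-1)) < 1e-12
print(prod)   # close to 1/e
```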
Proof of Integer Remainder Summation Conjecture
It seems wrong. Take $N=n=3$, $h_1=6,h_2=9,h_3=12$: then $f(n,N)=3$, $g(n,N)=6$.
Extremes of: $f(x,y,z)= (x-1)^2y^2(z+1)^2$ with: $x^2 + y^2 + z^2 \leq 1$
The result you found from the first partial derivatives is correct: each of them vanishes whenever $ \ x = 1 \ , \ y = 0 \ , $ or $ \ z = -1 \ \ $ (the three planes in the graph at left above). In fact, since every partial derivative of $ \ f(x,y,z) = (x-1)^2 · y^2 ·(z+1)^2 \ $ contains all three factors, the critical set of $ \ f \ $ is the union of those three planes. The planes $ \ x = 1 \ $ and $ \ z = -1 \ $ touch the unit ball $ \ x^2 + y^2 + z^2 \leq 1 \ $ only at the points $ \ (1 , 0 , 0 ) \ $ and $ \ ( 0 , 0 , -1) \ , $ which already lie in the plane $ \ y = 0 \ . $ So the critical points satisfying the constraint form the unit disc, centered on the origin, lying in the plane $ \ y = 0 \ ; $ on this disc the function attains its absolute minimum value of zero (since $ \ f(x,y,z) \ $ is non-negative). The situation you have with the Hessian matrix is that every entry has factors which equal zero at the values you found for the coordinate variables, so the second partial derivative test is inconclusive. Such critical points are called "degenerate" by some authors: this often occurs when the point belongs to a line, curve, or even region on which the Hessian vanishes. For such a situation (analogously to the second derivative test in single-variable calculus being inconclusive), we must look at the properties of the function itself (and possibly those of its first derivatives). The Lagrange system of equations is $$ 2 · (x-1) · y^2 ·(z+1)^2 \ = \ \lambda · 2x \ \ , \ \ 2 · (x-1)^2 · y ·(z+1)^2 \ = \ \lambda · 2y \ \ , $$ $$ 2 · (x-1)^2 · y^2 ·(z+1) \ = \ \lambda · 2z $$ $$ \Rightarrow \ \ \lambda \ = \ \frac{(x-1) · y^2 ·(z+1)^2}{x} \ = \ \frac{(x-1)^2 · y ·(z+1)^2}{y} \ = \ \frac{(x-1)^2 · y^2 ·(z+1)}{z} $$ $$ \Rightarrow \ \ x · (x-1) \ = \ y^2 \ = \ z · (z+1) \ \ . 
$$ The first and last parts of the chain equation yield $$ x^2 - x \ = \ z^2 + z \ \ \Rightarrow \ \ x^2 - z^2 \ = \ x + z \ \ \Rightarrow \ \ (x - z) · (x + z ) \ - \ (x + z) \ = \ 0 $$ $$ \Rightarrow \ \ (x - z - 1) · (x + z ) \ = \ 0 \ \ . $$ We thus wish to look for extrema on the intersection disks of the unit ball with the planes $ \ z = x - 1 \ $ [in yellow] or $ \ z = -x \ \ $ [in violet]. Because of the form of the function, it is clear that a local maximum of the function would occur at a point as far from $ \ (1 , 0 , -1) \ $ as possible, which makes the latter plane the better candidate to investigate. We can compare $ \ f(x,y,x-1) = (x-1)^2 · y^2 ·x^2 \ $ with $ \ f(x,y,-x) = (x-1)^2 · y^2 ·(-x+1)^2 \ = \ (x-1)^4 · y^2 \ $ to observe that $$ \ x^2 · (x-1)^2 \ \le \ (x-1)^4 \ \ \Rightarrow \ \ x^2 \ \le \ x^2 - 2x + 1 \ \ \Rightarrow \ \ 2x - 1 \ \le \ 0 \ \ . $$ So for $ \ x < \frac12 \ , $ it will suffice to consider the $ \ z = -x \ $ plane; if we fail to find an evident local maximum solution there, we will need to investigate the situation for the other plane. In the right-hand graph at the top, the view is downward toward the point $ \ (1,0,-1) \ \ . $ The function is symmetric about the $ \ xz-$ plane ( $ y = 0 $ ) , so the solution we seek will have the form $ \ (x \ , \ \pm y \ , -x ) \ \ . $ We would want a solution as far from that symmetry plane as possible. We don't actually know (though we may suspect) that the solution points are on the surface of the ball, so we will consider concentric spheres centered on the origin $ \ x^2 + y^2 + z^2 \ = \ r^2 \ , \ 0 \le \ r \le 1 \ $ through which we will take "slices" at $ \ y \ne 0 \ , $ and then examine those points for which $ \ z = -x \ \ . $ The graph below presents the geometrical situation. For the set of concentric spheres that intersect a $ \ y-$ slice, we have concentric circles $ \ x^2 + y^2 + z^2 \ = \ r^2 \ \Rightarrow \ x^2 + z^2 \ = \ r^2 - y^2 \ . 
$ The two points for which $ \ z = -x \ $ then satisfy $ \ 2x^2 \ = \ r^2 - y^2 \ \Rightarrow \ x^2 = \frac{r^2 - y^2}{2} \ . $ The function becomes $$ (x-1)^2 · y^2 ·(z+1)^2 \ \ \rightarrow \ \ (x-1)^2 · (r^2 - 2x^2) ·(-x+1)^2 \ = \ (x-1)^4 · (r^2 - 2x^2) \ \ . $$ Differentiating this expression and setting the derivative equal to zero produces $$ 4·(x-1)^3 · (r^2 - 2x^2) \ + \ (x-1)^4 · (-4x) \ = \ 4 · (x-1)^3 · (r^2 - 3x^2 + x) \ = \ 0 \ \ . $$ The solution $ \ x = 1 \ $ is extraneous, since it can only represent the point $ \ (1, 0, 0) \ , $ which is not on the $ \ y-$ slice. So we must have $ \ 3x^2 - x - r^2 \ = \ 0 \ , $ for which the solutions are $$ x \ = \ \frac{1 \ \pm \ \sqrt{1 + 12r^2}}{6} \ \ . $$ We want $ \ |x-1| \ $ to be as large as possible. For the "positive square-root" solutions, this would call for setting $ \ r = 0 \ , $ which would mean the solution point is the origin; this is inconsistent with the conditions discussed. The "negative square-root" solutions will be on the surface of the ball with $ \ r = 1 \ . $ These are $$ x \ = -z = \ \frac{1 - \sqrt{13}}{6} \ \approx \ -0.4343 \ \ , \ \ y \ = \ \pm \sqrt{1 - \left(\frac{7 - \sqrt{13}}{9} \right)} \ = \ \pm \frac{ \sqrt{2 + \sqrt{13}}}{3} \ \approx \ \pm 0.7892 \ \ . $$ So we do in fact find two solution points $ \ \left( \ \frac{1 \ - \ \sqrt{13}}{6} \ , \ \pm \frac{\sqrt{2 \ + \ \sqrt{13}}}{3} \ , \ \frac{\sqrt{13} \ - \ 1}{6} \ \right) \ , $ at which is attained the constrained maximum value for the function $$ (-0.4343 - 1)^2 · (\pm 0.7892)^2 · (0.4343 + 1)^2 \ \approx \ 2.6358 \ . $$ [This is also the value found by WolframAlpha, without any indication of how it is computed.]
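The claimed solution can be verified numerically (my own check of the final values):

```python
# Verify: x = -z = (1 - sqrt(13))/6 satisfies 3x^2 - x - 1 = 0, the point
# lies on the unit sphere (with y^2 = 1 - 2x^2), and the function value
# f = (x-1)^2 y^2 (z+1)^2 is approximately 2.6356.
import math

x = (1 - math.sqrt(13)) / 6
z = -x
y2 = 1 - 2 * x * x                         # from x^2 + y^2 + z^2 = 1, z = -x
f = (x - 1)**2 * y2 * (z + 1)**2

assert abs(x**2 + y2 + z**2 - 1) < 1e-12   # on the unit sphere
assert abs(3 * x**2 - x - 1) < 1e-9        # root of 3x^2 - x - r^2 = 0, r = 1
assert abs(f - 2.6356) < 1e-3
print(f)                                   # approximately 2.6356
```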
What is the difference between a natural transformation and a 2-morphism?
2-categories are a generalization of categories. They not only possess objects and morphisms between those objects, they also possess "morphisms between the morphisms", so-called 2-morphisms (all obeying various axioms, of course). A specific example of a 2-category is the 2-category of small categories, whose objects are categories, whose morphisms are functors between those categories, and whose 2-morphisms are natural transformations between those functors. Thus, 2-morphism is the general notion, and natural transformations are a specific example thereof.
Transitioning Between Two Bases
The elements of the $j$-th column of $T$, namely $t_{1j}, t_{2j}, \cdots, t_{nj}$, are the expansion coefficients of the $j$-th old basis vector $v_j$ with respect to the new basis $w_1,\cdots, w_n$, i.e. $v_j = \sum_{i=1}^n t_{ij} w_i$. Likewise, the elements of the $i$-th row of $T$, namely $t_{i1}, t_{i2}, \cdots, t_{in}$, are the expansion coefficients of the $i$-th new basis vector $w_i$ with respect to the old basis $v_1,\cdots, v_n$, i.e. $w_i = \sum_{j=1}^n t_{ij} v_j$.
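A quick numeric check (my own $2\times 2$ example, using an orthogonal transition matrix, for which both expansions are simultaneously consistent):

```python
# Orthogonal T (a rotation): rows of T give the new basis in terms of the
# old one, and column j recovers v_j as a combination of the w_i.
import math

th = 0.6
c, s = math.cos(th), math.sin(th)
T = [[c, s], [-s, c]]

v = [(1.0, 0.0), (0.0, 1.0)]                      # old basis (standard)
w = [tuple(sum(T[i][j] * v[j][k] for j in range(2)) for k in range(2))
     for i in range(2)]                           # w_i = sum_j t_ij v_j

# Column reading: v_j = sum_i t_ij w_i
for j in range(2):
    rec = tuple(sum(T[i][j] * w[i][k] for i in range(2)) for k in range(2))
    assert all(abs(rec[k] - v[j][k]) < 1e-12 for k in range(2))
```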
convolution of random variables
Your variables are Rayleigh distributed. The sum of independent Rayleigh variables (I assume they are independent) does not have a closed-form solution. A bound for the weighted sum, in terms of the non-weighted sum, is given here: http://sankhya.isical.ac.in/pdfs/60a2/6883fnl.pdf You should google "sum of Rayleigh" to find more material (there are some restricted papers: http://portal.acm.org/ft_gateway.cfm?id=1552090&type=pdf http://ieeexplore.ieee.org/iel5/4234/30221/01388722.pdf?arnumber=1388722 ). Regarding your observation "If $k$ is large enough, the law of large numbers may be used": I guess you mean the central limit theorem. Bear in mind that its applicability depends on the behaviour of the $c_i$; for example, if they decrease exponentially, the CLT cannot be applied. Bear also in mind that the CLT, as an asymptotic approximation, can be corrected for finite $N$ using an Edgeworth series with a few terms. Not very straightforward, though.
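A Monte Carlo sketch (my own, standard library only) of the unweighted sum: although the distribution of the sum has no simple closed form, its mean is $k\sigma\sqrt{\pi/2}$, and for moderate $k$ a normal approximation is often reasonable:

```python
# Simulate the sum of k i.i.d. Rayleigh(sigma) variables and compare the
# empirical mean with the exact value k * sigma * sqrt(pi/2).
import math, random

random.seed(0)
sigma, k, trials = 1.0, 8, 20000

def rayleigh():
    # inverse-CDF sampling: R = sigma * sqrt(-2 ln U), U uniform on (0, 1]
    return sigma * math.sqrt(-2 * math.log(1.0 - random.random()))

sums = [sum(rayleigh() for _ in range(k)) for _ in range(trials)]
mean = sum(sums) / trials

assert abs(mean - k * sigma * math.sqrt(math.pi / 2)) < 0.1
```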
real analysis limit problem
Your proof is nearly right. You only need to insert, at the appropriate place, that $f$ is uniformly continuous, so that you can select $\delta$ independent of $x_0$. You can also connect $N$ more explicitly to the support of $f$, so that not only the difference, but the function values themselves are zero.