If $E(X_i'X_i)$ is invertible and $\Sigma>0$, is $E(X_i'\Sigma^{-1}X_i)$ invertible?
Usually there is an additional assumption like $\operatorname{rk}(X)=K$, which implies that both $E(X^TX)$ and $E(X^T \Omega^{-1}X)$ have full rank. The empirical averages $E_N(X_i^TX_i)$ and $E_N(X_i^T \Omega^{-1}X_i)$ are then invertible provided that $N$ is sufficiently large (by the law of large numbers).
Link between a topological space and a manifold
The topology of the manifold is given by unions of pre-images of open subsets of $\mathbb R^n$ under the charts of an atlas. That is, instead of requiring that $M$ be a priori a topological space covered by homeomorphisms to $\mathbb R^n$, we can use an atlas of bijective charts whose transition maps are homeomorphisms to define its topology.
Is there a way to prove a boolean operator isn't universal?
Yes, though the details generally depend considerably on the specific operator or operators. This answer is an example of the kind of argument that can be used. For more information you might read about functional completeness. Added: If $T(w,x,y,z)=\neg(wxy)\oplus(xyz)$, then $T(x,x,x,x)=x\lor\neg x=\top$. (I use $\bot$ for false and $\top$ for true.) Suppose that each of the terms $t_1,t_2,t_3$, and $t_4$ is equivalent either to $x$ or to $\top$. Then $t_1t_2t_3$ is equivalent either to $x$ or to $\top$, as is $t_2t_3t_4$, and $T(t_1,t_2,t_3,t_4)$ is equivalent to one of $\neg x\oplus x=\top$, $\neg x\oplus\top=x$, $\bot\oplus x=x$, and $\bot\oplus\top=\top$. It follows by induction that every term built up from a single variable $x$ using $T$ is equivalent either to $x$ or to $\top$. In particular, $T$ cannot generate $\neg x$ and therefore is not universal.
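To see the induction concretely, here is a minimal brute-force sketch in Python (my own illustration, not part of the original argument): it encodes each one-variable term by its truth table and closes $\{x\}$ under $T$, confirming that only $x$ and $\top$ ever arise, so $\neg x$ is unreachable.

```python
from itertools import product

def T(w, x, y, z):
    # T(w, x, y, z) = not(w & x & y) XOR (x & y & z)
    return (not (w and x and y)) ^ (x and y and z)

# A one-variable term is encoded by its truth table (value at x=False, x=True).
x_table = (False, True)          # the variable x itself
funcs = {x_table}
while True:
    new = set(funcs)
    for f1, f2, f3, f4 in product(funcs, repeat=4):
        new.add(tuple(T(f1[v], f2[v], f3[v], f4[v]) for v in (0, 1)))
    if new == funcs:
        break
    funcs = new

print(sorted(funcs))              # [(False, True), (True, True)]: only x and T
print((True, False) in funcs)     # False: negation is never generated
```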
10 boys plan a camp and have sufficient food for 14 days; if 3 boys are ill and cannot go, how long will the food now last?
First, notice that $\frac{10}{14}$ is not food per child, but child per day. The important ratio here is not food per child, or days per child, or anything like that; it is the number of children before and after the illness. It is also important to note that the number of children and the number of days are inversely proportional, meaning that if the number of boys goes up, the number of days goes down. So: $$\frac{\text{days after illness}}{\text{days before illness}}=\frac{\text{boys before illness}}{\text{boys after illness}}$$ $$\frac{\text{days after illness}}{14}=\frac{10}{7}$$ Simply put, you want to scale the ratio $\frac{10}{7}$ by $14$: $$\text{days after illness}=14\cdot\frac{10}{7}=20$$
Derive an Approximation for Random Walk Problem
$$\mathbb{P}(X_{2n} =0) = \mathrm{Binomial}\left(n;2n,\frac{1}{4}\right) = \binom{2n}{n}\left(\frac{3}{16}\right)^n$$ Using Stirling's formula $$ \binom{2n}{n} \approx \frac{4^n}{\sqrt{n\pi}}$$ So $$ \mathbb{P}(X_{2n} =0) \approx \frac{1}{\sqrt{n\pi}}\left(\frac{3}{4}\right)^n$$
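As a quick numerical sanity check, here is a minimal Python sketch of the two formulas above:

```python
from math import comb, pi, sqrt

def exact(n):
    # P(X_{2n} = 0) = C(2n, n) * (3/16)^n
    return comb(2 * n, n) * (3 / 16) ** n

def approx(n):
    # Stirling-based approximation: (3/4)^n / sqrt(pi * n)
    return (3 / 4) ** n / sqrt(pi * n)

for n in (5, 20, 100):
    print(n, exact(n), approx(n), exact(n) / approx(n))
# the ratio exact/approx tends to 1 as n grows
```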
What is the sum of the squares of the 10th roots of unity?
We have $q^n-1 =(q-1)(1+q+\ldots+q^{n-1})$. Thus if $\xi$ is a primitive $n$-th root of unity with $n>1$, then $\xi^n=1$ and $\xi\neq1$, so $1+\xi+\ldots+\xi^{n-1}=0$ as required. In particular, the squares of the $10$th roots of unity run twice over the fifth roots of unity, so their sum is $2\cdot 0=0$.
Identify p.m.f. with the probability generating function
Assuming that $X$ is a discrete random variable taking values from $0$ to $n$, then, from the definition of the probability generating function (PGF), $G(z)=E(z^X)$, $$G(z)=E(z^X)=\sum_{k=0}^n z^k P(X=k)=P(X=0)+P(X=1)z+\dots +P(X=n)z^n$$ Thus the problem is to match the terms with the same exponents in the definition of $G(z)$ and in the given PGF. E.g. for $k=7$, $G(z)$ contains $P(X=7)z^7$, which is given as $\frac{z^7}{6}$ in the question; thus $P(X=7)=\frac{1}{6}$. Since the given PGF has no $z^3$ term, it simply means that $P(X=3)=0$. By examining the other terms you can find the PMF of the random variable asked about in the question.
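As an illustration, here is a short sympy sketch of reading off a p.m.f. by matching coefficients. The polynomial G below is a hypothetical PGF of the kind described (note its coefficients sum to $1$); it is not necessarily the exact one from the question.

```python
import sympy as sp

z = sp.symbols('z')
# Hypothetical PGF with a z^7/6 term like the one discussed above
G = sp.Rational(1, 3) + z**2 / 2 + z**7 / 6

poly = sp.Poly(G, z)
# P(X = k) is the coefficient of z^k; absent powers have probability 0
pmf = {k: poly.coeff_monomial(z**k) for k in range(poly.degree() + 1)}
print(pmf)                 # {0: 1/3, 1: 0, 2: 1/2, 3: 0, ..., 7: 1/6}
print(sum(pmf.values()))   # 1, as a PGF evaluated at z=1 must give
```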
Check series convergence
HINT: Use the Limit Comparison Test with $\sum_{n=1}^{\infty} \frac 1{n^2}$.
Are the Complex Numbers Isomorphic to the Polynomials, mod $x^2+1$?
Let $\phi : \mathbb R[x] \to \mathbb C$ be defined by $\phi(P)=P(i)$. One can check that this is a surjective ring homomorphism with $\ker(\phi)=\langle x^2+1 \rangle$. Apply the first isomorphism theorem.
Maximum surface area of a cylinder inside a sphere solution and question about the solution
"... since it doesn't satisfy $(\star)$." The layout of the solution is a little cluttered, so it's not as easy to find the $(\star)$ as it ideally would be, but it occurs just two lines above the part you underlined: $$ x \sqrt{r^2 - x^2} = r^2 - 2x^2 \qquad(\star) $$ The area calculation has been set up so that $x$ is positive (the height of the cylinder must be positive, and is set at $2x$). The symbol $\sqrt\quad$ always denotes the positive square root (or zero). So the left-hand side of $(\star)$ is positive or zero. But then the right-hand side must also be positive or zero, so $$ r^2 - 2x^2 \geq 0. $$ It follows that $x^2 \leq \frac12 r^2 = \frac{5}{10} r^2 < \frac{5+\sqrt5}{10} r^2.$ In other words, $\frac{5+\sqrt5}{10} r^2$ is just too big to be $x^2$; if we try to set $x^2 = \frac{5+\sqrt5}{10} r^2,$ we'll violate the equation labeled with $(\star).$
A problem with Order-Dense set Definition
If $x\in Z$, you can just choose $z= x$; similarly, if $y \in Z$, just take $z = y$. Thus your definition can be rephrased as follows: for all $x,y \in X \setminus Z$ such that $x > y$, there exists some $z \in Z$ with $x \geq z \geq y$, or equivalently $x > z > y$, since for $x,y\notin Z$ the two conditions coincide.
A few questions on the different aspects of differentiation from $\mathbb{R}^2 \to \mathbb{R}$
Well done. If $f$ were differentiable at $(0,0)$, then $Df(0,0)=0$ (the null vector), since $\partial_1 f(0,0)=\partial_2 f(0,0)=0$. But you can easily verify that the limit $$ \lim_{(x,y) \to (0,0)} \frac{f(x,y)}{\sqrt{x^2+y^2}} $$ is not zero, and therefore $f$ is not differentiable at $(0,0)$. Finally, it is a theorem that the existence and continuity of the partial derivatives around a point imply differentiability at that point.
Condition for all eigenvalues to be less than 1 (proof)
The inequality holds automatically for $z=0$. For $z \neq 0$, $\operatorname{det}(I-Az)=z^m \operatorname{det}(z^{-1}I-A)$, which will be zero if and only if $z^{-1}$ is an eigenvalue of $A$. Now $\{ z^{-1} : |z| \leq 1,z \neq 0 \}=\{ z : |z| \geq 1 \}$, so the result follows.
Visualizing weird topology
Let $U$ be any set that’s open in the usual topology. If $p\in U$, there is an open interval $I_p$ such that $p\in I_p\subseteq U$, so it’s certainly true that $p\in I_p$ and $I_p\cap\Bbb Q\subseteq U$. Thus, every subset of $\Bbb R$ that’s open in the usual topology is still open in $T$. Again let $U$ be any set that’s open in the usual topology, and let $A$ be any set of irrational numbers. Let $V=U\setminus A$, and suppose that $p\in V$. Then $p\in U$, so there is an open interval $I_p$ such that $p\in I_p\subseteq U$. Finally, $A\cap\Bbb Q=\varnothing$, so $p\in I_p$, and $$I_p\cap \Bbb Q\subseteq U\cap\Bbb Q\subseteq U\setminus A=V\;,$$ so $V\in T$. The members of $T$ are simply the sets of the form $U\setminus A$, where $U$ is open in the usual topology, and $A$ is any set of irrational numbers. In particular, for any $x\in\Bbb R$ the sets of the form $$\{x\}\cup\Big((x-\epsilon,x)\cap\Bbb Q\Big)\cup\Big((x,x+\epsilon)\cap\Bbb Q\Big)$$ with $\epsilon>0$ form a local base at $x$ in the space $\langle\Bbb R,T\rangle$.
Least squares ANOVA error
The expression $A'$ denotes the transpose of the matrix $A$. Thus $$ \varepsilon'\varepsilon = \begin{bmatrix} \varepsilon_1 & \cdots & \varepsilon_n \end{bmatrix} \begin{bmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_n \end{bmatrix} = \sum_{i=1}^n \varepsilon_i^2 = \text{sum of squares of errors}. \tag 1 $$ The $\text{“}{\sum}\text{''}$ in what you posted should not be there. Conventionally one denotes the least-squares estimates of $\beta$ by $\widehat\beta$, and then one has: \begin{align} \varepsilon & = Y - X\beta = \text{the vector of errors} \in \mathbb R^{n\times 1}, \\[8pt] \widehat\varepsilon & = Y - X\widehat\beta = \text{the vector of residuals} \in \mathbb R^{n\times 1}. \end{align} The notation in what you posted does not properly distinguish between errors and residuals. Notice that the errors may be uncorrelated and homoscedastic, but the residuals are then correlated since the vector of residuals is constrained to be orthogonal to every column of the design matrix $X$. The thing that gets minimized in least-squares estimation is not the sum of squares of errors in $(1)$ above, but rather the sum of squares of residuals: $$ \sum_{i=1}^n \widehat\varepsilon_i^2 = \text{sum of squares of residuals}. $$
Optimal coefficient in the Cauchy-Schwarz inequality?
If $h$ is not equal to zero almost everywhere, then $\alpha(h)=1$. First let us restrict ourselves to $f,g \in L^2(\mathbb{R})$. Otherwise we run into trouble with multiplying zero and infinity. I am assuming that we are talking about the Lebesgue measure with the usual Lebesgue sigma-algebra. Case 1: $h$ equal to zero almost everywhere. Let's take care of the trivial case. If $h$ is almost everywhere equal to zero, then we have $f=0$ a.e. and $g$ some a.e. positive $L^2(\mathbb{R})$ function. Then we have $$ \vert (f, g) \vert = 0 $$ As also $\Vert f \Vert=0$, we can pick $\alpha(h)$ to be whatever we like (even a negative constant). Case 2: $h$ not equal to zero almost everywhere. Let $h: \mathbb{R} \rightarrow \mathbb{R}$ be a measurable function such that $h$ is not almost everywhere equal to zero. First we note that there always exist admissible $L^2(\mathbb{R})$ functions such that $h=f/g$; for example, if $H\in L^2(\mathbb{R})$ is positive, then $$ f= \left(\frac{1_{\{\vert h \vert \geq 1\}}}{h} + 1_{\{\vert h\vert <1\}} h \right) H, \qquad g= \left(\frac{1_{\{\vert h \vert \geq 1\}}}{h^2} + 1_{\{\vert h\vert <1\}} \right) H $$ do the job. We have $$ \alpha(h) = \sup_{f,g\in L^2(\mathbb{R}): h=f/g} \frac{\vert (f,g) \vert}{\Vert f \Vert \cdot \Vert g\Vert}.$$ Note that $\Vert f \Vert, \Vert g \Vert \neq 0$ as $h$ is not equal to zero almost everywhere. Furthermore, by Cauchy-Schwarz we know that $\alpha(h) \leq 1$. Hence, we just need to show that $\alpha(h)\geq 1$. Handwaving outline: Our strategy is the following. We can replace $f,g$ by $1_A f, 1_A g$. Then we will pick $A$ to be roughly a level set of $h$. In that case $1_Af, 1_A g$ are almost scalar multiples of each other, and we get that the quotient is roughly one. Actual proof: In general we have for $C>0$ and $A\subseteq \mathbb{R}$ a measurable set that is not a null set, that if $f,g$ are admissible, then so are $1_A f + \frac{1}{C} 1_{A^c} f $, $1_A g + \frac{1}{C} 1_{A^c} g $. Hence, if we let $C$ go to infinity, we get for every measurable set $A\subseteq \mathbb{R}$ such that $\{ h \neq 0 \}\cap A$ is not a null set (and thus $\Vert 1_A f \Vert, \Vert 1_A g\Vert \neq 0$) $$ \alpha(h) \geq \sup_{f,g\in L^2(\mathbb{R}): h=f/g} \frac{\vert (1_A f,1_A g) \vert}{\Vert 1_A f \Vert \cdot \Vert 1_A g\Vert}. $$ Now we want to make $A$ almost a level set of $h$ such that $1_A f$ and $1_A g$ are almost scalar multiples of each other. First we note that $\alpha(h) = \alpha(-h)$; hence, without loss of generality we may assume that $\{ h>0\}$ is not a null set. Thus, we can pick $R>0$ such that $\{h>0 \}\cap [-R,R]$ is not a null set. For every $\varepsilon >0$ there exists $\xi(\varepsilon)>0$ such that $$A_\varepsilon := \{h>0 \} \cap h^{-1}([\xi(\varepsilon) -\varepsilon, \xi(\varepsilon) + \varepsilon]) \cap [-R, R]$$ is not a null set. By subdividing the interval $[\xi(\varepsilon) -\varepsilon, \xi(\varepsilon) + \varepsilon]$ we find a sequence $(\xi_{n_l})_{l \geq 1}$ ($n_l < n_{l+1}$) such that $\xi_{n_l}\geq 1/\sqrt{n_l}$ and such that $$ B_{n_l} := h^{-1}([\xi_{n_l} -1/n_l, \xi_{n_l} + 1/n_l]) \cap [-R, R] $$ is not a null set.
Now, if we pick $A=B_{n_l}$ and $f=h$, $g=1$ on $B_{n_l}$ (this we are allowed to do as $B_{n_l}$ has finite measure, being a subset of $[-R,R]$, and $h$ is bounded on that set) we get $$ \alpha(h) \geq \frac{\vert (1_{B_{n_l}} h,1_{B_{n_l}} 1) \vert}{\Vert 1_{B_{n_l}} h \Vert \cdot \Vert 1_{B_{n_l}} 1\Vert} \geq \frac{(\xi_{n_l} - 1/n_l) \Vert 1_{B_{n_l}} \Vert^2}{(\xi_{n_l} + 1/n_l) \Vert 1_{B_{n_l}} \Vert^2} = \frac{\xi_{n_l} - 1/n_l}{\xi_{n_l} + 1/n_l} = 1 - \frac{2/n_l}{\xi_{n_l}+1/n_l} \geq 1 - \frac{2/n_l}{\xi_{n_l}} \geq 1-\frac{2}{\sqrt{n_l}},$$ where the last step uses $\xi_{n_l}\geq 1/\sqrt{n_l}$. Let $l$ go to infinity; then we obtain $$ \alpha(h) \geq 1.$$ Added: It was pointed out in the comments that it is not obvious that we can choose $\xi_{n_l}\geq 1/\sqrt{n_l}$. Let me elaborate on that. We know that there exists some $L>0$ such that $$ A := \{h>0 \} \cap h^{-1}((0,L]) \cap [-R, R] $$ has positive measure. Writing $$ A = \bigcup_{n\geq 1} \left( \{h>0 \} \cap h^{-1}((L/2^{n},L/2^{n-1}]) \cap [-R, R] \right) $$ we get that there exists $\xi >0$ such that $$ \{h>0 \} \cap h^{-1}([\xi,L]) \cap [-R, R] $$ has positive measure. Now pick $n_1$ such that $\xi \geq 1/\sqrt{n_1}$. We have for $N\geq n_1$ $$ \{h>0 \} \cap h^{-1}([\xi,L]) \cap [-R, R] = \bigcup_{j=0}^{N^2-1}h^{-1}([\xi + j\tfrac{L-\xi}{N^2}, \xi + (j+1) \tfrac{L-\xi}{N^2} ])\cap [-R, R] $$ Then one of those sets will have positive measure; the length of each interval is $\frac{L-\xi}{N^2}$ and the midpoints of the intervals are greater than or equal to $\xi$. Hence we can in particular pick the $\xi_n$ in the way described above (in fact the lengths of the intervals can be made to decay as fast as we want, and the inequality of the midpoints versus the lengths can be made as good as we like).
Another exercise from Fleming's Functions of Several Variables.
I forgot that $\Phi^l(\mathbf{x})=0$ for every $\mathbf{x} \in U$. Hence, $\{d\Psi^l(\mathbf{x})=g^l(\mathbf{x})\cdot d\Phi^l(\mathbf{x}):l=1,\dots,n-r\}$ is a set of linearly independent vectors, since invertible linear transformations preserve linear independence.
Probability with intersecting normal distributions
This is easy if you look at the probability distribution of $a-b$. As $a \sim \mathcal{N}(\mu_a, \sigma_a^2)$ and $b\sim \mathcal{N}(\mu_b, \sigma_b^2)$ are independent, you have that $$a-b = Z \sim \mathcal{N}(\mu_a-\mu_b, \sigma_a^2+\sigma_b^2)$$ Then you can calculate $P(Z<0)$ as usual.
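A minimal sketch with scipy (the numeric means and variances below are hypothetical placeholders):

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical parameters of the two independent normals
mu_a, sigma_a = 100.0, 15.0
mu_b, sigma_b = 90.0, 20.0

# Z = a - b ~ N(mu_a - mu_b, sigma_a^2 + sigma_b^2)
mu_z = mu_a - mu_b
sigma_z = sqrt(sigma_a**2 + sigma_b**2)

# P(a < b) = P(Z < 0)
print(norm.cdf(0.0, loc=mu_z, scale=sigma_z))
```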
Problem in Differential Equation (How to proceed?)
Square both sides: $$1-y^2=(x+c_1)^2$$ rearrange for $y^2$: $$y^2=1-(x+c_1)^2$$ take $\pm$ square root: $$y=\pm\sqrt{1-(x+c_1)^2}$$
How can I compute this sum?
Note for integer $m$ we have $$\sum_{k=0}^{m-1}\frac1{m-k}=\sum_{j=1}^m\frac1j=H_m\sim\ln m+\gamma,$$ where $H_m$ denotes the $m$th Harmonic number and $\gamma$ the Euler-Mascheroni constant.
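A quick numeric check in Python (with $\gamma$ hard-coded to its known value):

```python
from math import log

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def H(m):
    # the m-th harmonic number, summed directly
    return sum(1 / j for j in range(1, m + 1))

for m in (10, 1000, 100000):
    print(m, H(m), log(m) + GAMMA)  # the two columns agree ever more closely
```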
$(a+b)^\beta \leq a^\beta +b^\beta$ for $a,b\geq0$ and $0\leq\beta\leq1$
When $a=b=0$, the result is obvious. Assume that $(a,b)\ne(0,0)$. Then, $t=a/(a+b)$ is in $[0,1]$, hence $t^{\beta}\geqslant t$ because $\beta\leqslant1$. Likewise, $1-t=b/(a+b)$ is in $[0,1]$, hence $(1-t)^{\beta}\geqslant 1-t$. Summing these two inequalities yields $t^{\beta}+(1-t)^{\beta}\geqslant 1$; multiplying through by $(a+b)^{\beta}$ gives $a^{\beta}+b^{\beta}\geqslant(a+b)^{\beta}$, which is your result.
Proof by induction for tricky double summation: $(\sum_{k=1}^n x_k)\cdot(\sum_{k=1}^n \frac{1}{x_k})\ge{n^2}$
The inductive step should be $$\left(\sum_{i=1}^n x_i\right)\left(\sum_{i=1}^n \frac{1}{x_i}\right)$$ $$=\left(\sum_{i=1}^{n-1} x_i + x_n\right)\left(\sum_{i=1}^{n-1} \frac{1}{x_i} + \frac{1}{x_n}\right)$$ $$=\left(\sum_{i=1}^{n-1} x_i\right)\left(\sum_{i=1}^{n-1} \frac{1}{x_i}\right) + \frac{1}{x_n}\left(\sum_{i=1}^{n-1} x_i\right) + x_n\left(\sum_{i=1}^{n-1}\frac{1}{x_i}\right) + 1$$ Now $$\left(\sum_{i=1}^{n-1} x_i\right)\left(\sum_{i=1}^{n-1} \frac{1}{x_i}\right) \geq (n-1)^2$$ by the inductive hypothesis, and $$\frac{1}{x_n}\left(\sum_{i=1}^{n-1} x_i\right) + x_n\left(\sum_{i=1}^{n-1} \frac{1}{x_i}\right) = \sum_{i=1}^{n-1}\left(\frac{x_i}{x_n} + \frac{x_n}{x_i}\right) \geq 2(n-1)$$ since $t+\frac1t\geq 2$ for every $t>0$, so that $$\left(\sum_{i=1}^{n-1} x_i\right)\left(\sum_{i=1}^{n-1} \frac{1}{x_i}\right) + \frac{1}{x_n}\left(\sum_{i=1}^{n-1} x_i\right) + x_n\left(\sum_{i=1}^{n-1}\frac{1}{x_i}\right) + 1 \geq (n-1)^2 + 2(n-1) + 1 = ((n-1) + 1)^2 = n^2$$
If a linear transformation $T$ has $z^n$ as the minimal polynomial, there is a vector $v$ such that $v, Tv,\dots, T^{n-1}v$ are linearly independent
Since $z^n$ is the minimal polynomial, we have $T^{n-1}\neq 0$ and hence $\exists v\in V$ such that $T^{n-1}v\neq 0$. Suppose $$a_0v+a_1Tv+\cdots +a_{n-1}T^{n-1}v=0.$$ Applying $T^{n-1}$ on both sides yields $a_0T^{n-1}v=0$ and hence $a_0=0$. Now, $$a_1Tv+a_2T^2v+\cdots +a_{n-1}T^{n-1}v=0$$ and applying $T^{n-2}$ gives $a_1=0$. Now, you should be able to see how to use induction.
Finding Cartesian equation of geometric shapes.
I can't comment right now due to lack of reputation, and this shouldn't really be a full answer. But if we're talking about vectors in $2$-space, you can certainly make triangles and squares, whatever you want; this is what vector geometry in $2$-space is for. For instance, when we find the resultant of two vectors, we are drawing a parallelogram, just as we would if we were solving for a side of a non-right-angled triangle. This is what the Cartesian plane is for. If $OA$ is the vector $(3,0)$ and $OB$ is $(0,5)$, and we use vector addition to draw a third vector, what shape is made? It is important to have this understanding of the Cartesian plane early. Now, you might be confused and be thinking of graphing shapes using polynomials, in which case it's certainly not impossible: you can make a circle using a polynomial equation, though I've never personally tried to make a triangle or square... Edit: Unfortunately I fail to understand the question. Sorry.
Promises in the hidden subgroup formulation of graph isomorphism problem
First of all, $K$ is the symmetry group of $C$, not "a" "symmetric" group. Second of all, just because the letter $H$ is used in the statement of the hidden subgroup problem and the letter $H$ is also used for one of the groups in the graph isomorphism problem setup doesn't mean that the latter is functioning as the former. Indeed, we're supposed to turn the isomorphism problem into a hidden subgroup problem, so the problem needs to depend on what graphs we have, but $H$ does not at all depend on the graphs $A$ and $B$; it only depends on their size $n$. Furthermore, $H$ is already known right from the get-go, whilst the promised subgroup in a hidden subgroup problem is a mystery that we need to find out. That's $K$. Here, we have: a subgroup $K\le {\cal P}_{2n}\times{\cal P}_{2n}$ (namely, the symmetry group of $C$: the set of all permutations of the vertex set which leave the edge set the same); and a function $f:{\cal P}_{2n}\times{\cal P}_{2n}\to M_{2n\times 2n}$ which sends a permutation of all of the vertices of $C$ to the adjacency matrix of the new graph obtained by keeping the same vertex set but applying the permutation to the endpoints of each edge, thereby moving the edges around. The function $f$ is constant on cosets of $K$, and distinct on distinct cosets. (Are you able to prove this fact? Argue $f(\alpha)=f(\beta)$ if and only if $\alpha\beta^{-1}\in K$.) This is the setup of the hidden subgroup problem. By solving it, we find $K$. After we have $K$, we merely need to check if $K\cap\sigma H$ is empty or not: if it's empty, then $A$ and $B$ are nonisomorphic, whereas if it's not empty, there is an isomorphism $A\cong B$ induced from an element of $K\cap\sigma H$, and this solves the graph isomorphism problem. (As the arxiv paper you linked last time states, we technically don't need to fully solve for $K$ here, only figure out whether $K\cap\sigma H$ is empty or not.)
Number of ways of distributing 6 objects to 6 persons
The total number of ways to distribute $6$ objects among $6$ people with no restriction is $6^6$: each object has $6$ choices, i.e. $6$ people to choose from. (You might instead have selected a person and counted choices for him: $6$ for the $6$ objects and $1$ for when he does not get any object. That gives $7^6$, which is incorrect, because the choices made for different people are not independent.) The number of ways in which each person gets exactly $1$ object is $6!$, which you got. So the required number of ways, in which not every person gets an object, is $$6^6-6!$$ giving you a probability of $$\frac{6^6-6!}{6^6}$$
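The numbers are small enough to confirm by brute force; a short Python sketch enumerating all $6^6$ assignments:

```python
from itertools import product
from math import factorial

n = 6
total = bijections = 0
for assign in product(range(n), repeat=n):  # assign an owner to each object
    total += 1
    if len(set(assign)) == n:               # every person got exactly one
        bijections += 1

print(total, 6**6)                   # 46656 46656
print(bijections, factorial(6))      # 720 720
print((total - bijections) / total)  # the probability (6^6 - 6!) / 6^6
```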
$\int_{\Omega}|fg|d \mu \leq ||f||_2||g||_2$
Case $1$: For any $h \in \mathcal{L}^2(\Omega, \mu)$, if $\|h\|_2=0$ then $\int |h|^2 d\mu =0$, so $|h| = 0$ $a.e.$, so $h=0$ $a.e.$ So we have that, if $\|f\|_2=0$ or $\|g\|_2=0$, then $f=0$ $a.e.$ or $g=0$ $a.e.$ So $fg=0$ $a.e.$, so $| fg| =0$ $a.e.$, and $$\int_{\Omega}|fg|d \mu =0 =\|f\|_2\|g\|_2$$ Case $2$: If $\|f\|_2=1 = \|g\|_2$, then we have $\int |f|^2 d\mu =1$ and $\int |g|^2 d\mu =1$. Note that $(|f|-|g|)^2 \geq 0$. So $$\int_\Omega (|f|-|g|)^2 d\mu \geq 0$$ So, we have, $$\int_\Omega |f|^2 d\mu + \int_\Omega |g|^2 d\mu -2 \int_\Omega |fg|d\mu = \int_\Omega (|f|-|g|)^2 d\mu \geq 0$$ Since $\int |f|^2 d\mu =1$ and $\int |g|^2 d\mu =1$, we have $$2 -2 \int_\Omega |fg|d\mu \geq 0$$ So $$\int_\Omega |fg|d\mu \leq 1 = \|f\|_2 \|g\|_2$$ Case $3$: If $\|f\|_2 \neq 0$ and $\|g\|_2 \neq 0$, then let $\hat{f} =\frac{f}{\|f\|_2}$ and $\hat{g} =\frac{g}{\|g\|_2}$. Clearly, $\hat{f}, \hat{g} \in \mathcal{L}^2(\Omega, \mu)$ and $\|\hat{f}\|_2=1 = \|\hat{g}\|_2$. So, by Case $2$, we have $$\int_\Omega |\hat{f}\hat{g}|d\mu \leq 1 = \|\hat{f}\|_2 \|\hat{g}\|_2$$ But, since $\hat{f} =\frac{f}{\|f\|_2}$ and $\hat{g} =\frac{g}{\|g\|_2}$, we have $$\frac{1}{\|f\|_2\|g\|_2 }\int_\Omega |fg|d\mu=\int_\Omega |\hat{f}\hat{g}|d\mu \leq 1 $$ So $$\int_\Omega |fg|d\mu \leq \|f\|_2 \|g\|_2$$ Remark: From Case $1$ and Case $3$ we have that $$\int_\Omega |fg|d\mu \leq \|f\|_2 \|g\|_2$$ is valid for all $f, g \in \mathcal{L}^2(\Omega, \mu)$.
Strange monomorphism of sheaves
The inclusion $i \colon \mathsf{Sh}(\mathcal C,J) \hookrightarrow \hat{\mathcal C}$ has a left adjoint, namely the sheafification functor. Being a right adjoint, $i$ preserves limits, and in particular monomorphisms.
Is there a perfect group which is a finite extension of the discrete Heisenberg group $H_3(\Bbb Z)$?
No. The main reason is that $\mathrm{GL}_2(\mathbf{Z})$ has no nontrivial finite perfect subgroup (its finite subgroups have order $\le 12$). Hence assume by contradiction that $G$ is such a group. The action on the abelianization of $H_3$ yields a homomorphism $G\to\mathrm{GL}_2(\mathbf{Z})$; its image is perfect, and it is finite because $H_3$ acts trivially (inner automorphisms act trivially on the abelianization), so the image is a quotient of the finite group $G/H_3$. Hence the image is trivial. Hence, using the commutator map, the action on the center of $H_3$ is also trivial, and hence the action on $H_3$ is by automorphisms that are trivial both on the abelianization and on the derived subgroup, and these form an abelian group. Again using that $G$ is perfect, we deduce that its action by conjugation on $H_3$ is trivial. So $H_3$ is central. Since $H_3$ is non-abelian, this is a contradiction.
Prove that $\mu(\bigcup_{i=1}^{\infty}\bigcap_{n=i}^{\infty}B_{n}) \le \liminf_{n\to \infty} \mu (B_{n})$
We use the fact that if $\{A_n\}$ is an increasing sequence of measurable sets, then $\mu(A_n)\to \mu\left(\bigcup\limits_{j=1}^{+\infty}A_j\right)$. Hence $$\mu\left(\bigcup_{i=1}^{+\infty}\bigcap_{k=i}^{+\infty}B_k\right)=\lim_{n\to +\infty}\mu\left(\bigcup_{i=1}^n\bigcap_{k=i}^{+\infty}B_k\right)=\lim_{n\to +\infty}\mu\left(\bigcap_{k\geq n}B_k\right). $$ Now, use the fact that $\bigcap\limits_{k\geq n}B_k\subset B_j$ for any $j\geq n$ to get the result. Note that this inequality doesn't need to be an equality; for example, on $\{0,1\}$ with the counting measure, take $B_{2n}=\{0\}$, $B_{2n+1}=\{1\}$. Then $\liminf_n B_n=\emptyset$ but $\liminf_n\mu(B_n)=1$.
The content of a polynomial vs the ideal of its values
As you correctly observed, the condition which is necessary and sufficient is that the cardinality of every residue field of $A$ at maximal ideals is greater than $n=\deg f$. Your example gives the necessity. To prove sufficiency, first, we may go modulo $\nu(f)$, and then we want to show that $c(f)=0$. Thus, we may assume $f(a)=0$ for all $a\in A$. Notice that the constant term of $f$ must therefore be zero, since it is $f(0)$. So, we may write $f(x)=xg(x)$ with $\deg g=n-1$. We show that $c(f)=0$ when we localize at any maximal ideal, which will prove what we need. Given any maximal ideal $\mathfrak{m}$, by our hypothesis, we can find $t_1,\ldots, t_n\in A$ so that their images in $A/\mathfrak{m}$ are non-zero and distinct. Then, $g(t_i)=0$ in $A_{\mathfrak{m}}$ for all $i$. Using the Vandermonde determinant, you can see that all the coefficients of $g$ must be zero in $A_{\mathfrak{m}}$.
What is the limit of the series (summation) of the q-Pochhammer symbol or the ~q-Pochhammer symbol?
I will take this opportunity to share some of the cooler things about elementary symmetric polynomials which seem to apply in this case. Let's forgo considerations of convergence for the moment, and consider these products symbolically as polynomial expansions in some auxiliary $x$. For your third expression, suppose more generally that we are looking at products of the form $$E_n(x) := \prod_{1 \leq i \leq n} \left(1 - e^{f(i)} x\right).$$ Then we can expand these products for $x_j := e^{f(j)}$ as follows: $$E_n(x) = \sum_{k=0}^n (-1)^k e_k(x_1,x_2,\ldots,x_n) x^k.$$ Now if we define the $k^{th}$ power sum symmetric polynomial by $$p_k(n) := \sum_{i=1}^n x_i^k = \sum_{i=1}^n e^{k \cdot f(i)},$$ we typically can get expansions for the limiting cases of these products as $$\lim_{n \rightarrow \infty} E_n(x) = \exp\left(-\sum_{k \geq 1} \frac{p_k(\infty)}{k} x^k\right),$$ formally at least, and when $x \mapsto 1$ provided suitable convergence conditions on the infinite products. Now let's examine a slightly more general form of your first two series expressions (again, without immediate considerations of convergence for right now): $$F_N(x) := \sum_{n \leq N} E_n(x) = N + \sum_{k=1}^{N} (N+1-k) (-1)^k e_k \cdot x^k.$$ Thus to evaluate these classes of series, we need to examine the limiting cases of the following equations as $N \rightarrow \infty$: $$\lim_{N \rightarrow \infty} \left[N + (N+1) (E_{N}(1)-1) + [w^N] \left(\sum_{i \geq 1} p_i(\infty) w^i\right) \exp\left(-\sum_{k \geq 1} p_k(\infty) \frac{w^k}{k}\right)\right].$$ This is not a full solution, as the corresponding cases of the power sum polynomials, $p_k(\infty) = \sum_{i \geq 1} e^{k \cdot f(i)}$, are difficult to sum in closed form, but it should allow you to make some simplifications for special cases of the $f(i)$. For example, I was able to approximate some of these series in the special case of $f(i) := -i$ by making some asymptotic assumptions.
Trigonometry problem cosine identity
Using \begin{align}&2\cos^2 x=\cos 2x +1, \\&4\cos^3x=\cos 3x+3\cos x, \\ \text{and}&2\cos a \cos b= \cos(a+b)+\cos(a-b),\end{align} we obtain, \begin{align}&\cos^6 x=(\cos^3 x)^2\\ \\=&\left(\dfrac{\cos 3x+3\cos x}{4}\right)^2\\ \\=&\dfrac1{16}(\cos^2 3x + 9\cos^2 x+6\cos x\cos 3x)\\ \\=&\dfrac{1}{16}\left(\dfrac{\cos 6x+1}{2}+9\dfrac{\cos 2x +1}2+3\cos 4x +3\cos 2x\right).\end{align} It is seen that the constant term is $\dfrac{5}{16}$ or $\dfrac{10}{32}$. (This of course, makes use of the question's assumption that the number $a_0$ is unique.)
Why must $A\subset B$ and $A \cap C = \varnothing$ for $A \cap (B-C) = A$
Draw a diagram. Picture a Venn diagram of $A$, $B$, and $C$ with $B\setminus C$ shaded, and then one with $A\cap(B\setminus C)$ shaded. If we're given that $A\cap(B\setminus C)$ equals $A$, then every element of $A$ must lie in the region $A\cap(B\setminus C)$. This region is completely outside $C$, so $A$ and $C$ must be disjoint, and it is completely inside $B$, so $A$ must be a subset of $B$. $A$ doesn't have to be a strict subset of $B$, though: it is possible under your assumptions that $A$ and $B$ are the same set.
Show that $T(JX,JY)+T(X,Y)=-\frac{1}{2}N_{J}(X,Y)$
Where did you find that relation? What you can prove (strongly related to almost complex structures on manifolds) is the following: $\textbf{Claim}$: Let $(M,g,\nabla)$ be a Riemannian manifold endowed with a connection, and let $J\in T^1_1(M)$ be such that $J \circ J= -\mathrm{id}_{TM}$ and $J$ is compatible with the connection (i.e. $\nabla_X JY=J \nabla_X Y$). Then \begin{equation} N_J(X,Y)= T_{\nabla}(X,Y) + JT_{\nabla}(JX,Y) +JT_{\nabla}(X,JY) - T_{\nabla}(JX,JY) \end{equation} Proof. Expanding the RHS, \begin{align*} RHS =&\nabla_X Y - \nabla_YX -[X,Y]+ J \nabla_{JX} Y - J\nabla_Y JX -J[JX,Y]+ \\&+J\nabla_{X} JY - J\nabla_{JY} X -J[X,JY]- \nabla_{JX} JY + \nabla_{JY} JX +[JX,JY]\\ =&\nabla_X Y - \nabla_YX -[X,Y]+ J \nabla_{JX} Y +\nabla_Y X -J[JX,Y]+ \\&-\nabla_{X} Y - J\nabla_{JY} X -J[X,JY]- J\nabla_{JX} Y + J\nabla_{JY} X +[JX,JY]\\ =&\ -[X,Y] -J[JX,Y] -J[X,JY] +[JX,JY]\\ =& J^2[X,Y] -J[JX,Y] -J[X,JY] +[JX,JY]=N_J(X,Y) \end{align*} I am not sure you can simplify the expression to what you wrote down without some extra requirements.
Associated matrix with respect to a basis
Your matrix $$ \begin{bmatrix}1&0\\1&1\\0&1\end{bmatrix} $$ represents the function $f$ if we use the standard basis $R_3=((1,0,0);(0,1,0);(0,0,1))$ of $\mathbb{R}^3$. If you use the basis $R_3^1=((1,0,0);(1,1,0);(0,1,1))$, note that the image of $(1,0)$, namely $(1,1,0)$, is the second element of this new basis, and the image of $(0,1)$, namely $(0,1,1)$, is the third element, so the matrix becomes: $$ \begin{bmatrix}0&0\\1&0\\0&1\end{bmatrix} $$
Covariance of Sum and Differences of Dice Values
$\operatorname{Cov}(Y, Z)=\operatorname{Cov}(X_1+X_2,\,4X_1-X_2)$ $=4\operatorname{Cov}(X_1,X_1) -\operatorname{Cov}(X_1, X_2)+4\operatorname{Cov}(X_2, X_1)-\operatorname{Cov}(X_2, X_2)$ (by bilinearity) $=4\operatorname{Var}(X_1)+3\operatorname{Cov}(X_2, X_1)-\operatorname{Var}(X_2)$ $=4\operatorname{Var}(X_1)+0-\operatorname{Var}(X_2)$ (since $X_1$ and $X_2$ are independent)
Independent conditional probabilities are multiplied?
A simple proof of the base case uses the fact that $$P(B|A)=\frac{P(B\cap A)}{P(A)} \implies P(B \cap A)=P(B|A)P(A)$$ Now, we assume that $A$ and $B$ are independent events, so $P(B|A)=P(B)$ since $B$ does not depend on $A$ at all. You can test this by, say, flipping a coin. What is the probability that your second flip is tails given your first flip is heads? Therefore, $P(B \cap A)=P(B|A)P(A)=P(B)P(A)$ for independent events. This can be generalized: $$P(A_1 \cap A_2 \cap ...\cap A_n)=P(A_n|A_1\cap A_2\cap...\cap A_{n-1})\cdots P(A_3|A_2\cap A_1)P(A_2|A_1)P(A_1)$$ $$=P(A_n)\cdots P(A_3)P(A_2)P(A_1)$$ This is all because of the relationship between the intersection of probabilities and the conditional probability. Anytime there is a conditional probability with independent events, you can ignore the right side of the condition. Intuitively, this makes sense when you consider the coin flips. What is the probability that we get heads, then tails? Let's call that outcome $HT$. Our state space is $\{TT,TH,HT,HH\}$, all with equal likelihood of happening. $$P(HT)=P(T|H)P(H)=P(T)P(H)=\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}$$ The same for three flips: our state space is $\{TTT,TTH,THT,THH,HTT,HTH,HHT,HHH\}$, and $$P(HHT)=P(T|HH)P(H|H)P(H)=P(T)P(H)P(H)=\frac{1}{2}\cdot\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{8}$$
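The coin-flip intuition is easy to confirm by enumerating the equally likely outcomes (a minimal Python sketch):

```python
from itertools import product

# All 8 equally likely outcomes of three fair coin flips
outcomes = list(product('HT', repeat=3))

p = sum(1 for o in outcomes if o == ('H', 'H', 'T')) / len(outcomes)
print(p)   # 0.125 = 1/8 = P(H) * P(H) * P(T)
```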
Figuring the domain of the constant $a$ in an equation with some condition
If $a^2$ is not an integer, then $x$ cannot be an integer. So assume that $a^2$ is a nonnegative integer. If $$x=n+\alpha,\ 0\leq \alpha <1,\ n\in \mathbb{Z}$$ then $$ -3 \alpha^2+2\alpha +a^2=0 $$ So $$ \alpha = \frac{- 1\pm \sqrt{ 1+3a^2 }}{-3} \Rightarrow \alpha = \frac{1+\sqrt{1+3a^2}}{3}\ \text{or}\ a=0,\ \alpha=0 $$ Since $\alpha<1$, this forces $a^2< 1$, so $a=0$ and $\alpha=2/3$. But we have already shown that if $a=0$ then there exists an integral solution. So if $a^2$ is not a nonnegative integer, the equation has no integral solution. So (b).
propositional logic entailment proof
Note: $$A\Leftrightarrow B \qquad\equiv\qquad (A\wedge B) \vee (\neg A\wedge\neg B)$$ $$A\nLeftrightarrow B \qquad\equiv\qquad (A\wedge \neg B) \vee (\neg A\wedge B)$$ What you are to prove is: $$\Sigma \vDash\varphi \iff (\phi_1\wedge \ldots\wedge\phi_n)\to\varphi \text{ is a tautology}$$ This is not (necessarily) the same as: $\Sigma \vDash\varphi \wedge \big((\phi_1\wedge \ldots\wedge\phi_n)\to\varphi\big)$
Łoś's Theorem holds for positive sentences at reduced products in general?
This is a surprisingly subtle issue. My previous answer was incorrect, and I've deleted it. Let's start by fixing terminology. A positive sentence is built up from (positive) atomic formulas using $\land$, $\lor$, $\forall$, and $\exists$ (but not $\lnot$). A basic Horn formula is of the form $\psi_1\land\dots\land\psi_n \rightarrow \theta$, where the $\psi_i$ and $\theta$ are (positive) atomic formulas. A Horn sentence is built up from basic Horn formulas using $\land$, $\forall$, and $\exists$ (but not $\lnot$ or $\lor$). Let $I$ be an infinite set indexing a collection of structures $\langle A_i\rangle_{i\in I}$, $D$ a proper filter on $I$, and $A = \Pi_DA_i$ the reduced product. Say a sentence $\phi$ is weakly preserved under reduced product if $\{i\,|\,A_i\models\phi\} = I$ implies $A\models \phi$. Say a sentence $\phi$ is strongly preserved under reduced product if $\{i\,|\,A_i\models\phi\}\in D$ implies $A\models \phi$. Say a sentence $\phi$ is preserved under reduced factors if $A\models\phi$ implies $\{i\,|\,A_i\models\phi\} \in D$. Your question asked whether Los's theorem holds for positive sentences in reduced products, i.e. whether every positive sentence is strongly preserved under reduced product and preserved under reduced factors. In fact, positive sentences are preserved under reduced factors, but not necessarily under reduced product (strongly or weakly). Surprisingly, the problem occurs with the $\lor$ case. Here's an example. Let $L = \{c, P,Q\}$, where $c$ is a constant and $P$ and $Q$ are both unary relation symbols. Define $\langle A_i\rangle_{i\in\mathbb{N}}$ as follows: all $A_i$ consist of just one element, $c$. If $i$ is even, $A_i\models P(c)\land \lnot Q(c)$, but if $i$ is odd, $A_i\models \lnot P(c)\land Q(c)$. Now let $D$ be the cofinite filter on $\mathbb{N}$. Then the reduced product $A$ consists of just one element, $c$, and $A\models \lnot P(c) \land \lnot Q(c)$. So the sentence $P(c)\lor Q(c)$ is false in $A$, despite holding in all of the $A_i$. Okay, so $\lor$ isn't preserved under reduced products, but it turns out that the other operations are okay (in particular, both quantifiers are). In fact, we can expand our basic building blocks up from atomics to basic Horn sentences and still get the strong preservation under reduced product. Here's a summary of what's known, as far as I know (most can be found in C+K Section 6.2, recently reprinted by Dover!) Horn sentences are strongly preserved under reduced product. Every sentence which is weakly preserved under reduced product is equivalent to a Horn sentence. Together, since strongly preserved under reduced product implies weakly preserved under reduced product, these imply that "weakly preserved under reduced product" "strongly preserved under reduced product" and "equivalent to a Horn sentence" are all equivalent notions. Every positive sentence is preserved under reduced factors. The converse is not true: there are sentences preserved under reduced factors which are not equivalent to positive sentences. Notice that Los's theorem is exactly "strongly preserved under reduced product" + "preserved under reduced factors". So Los's theorem holds for sentences which are equivalent to both a positive sentence and a Horn sentence. This includes all sentences built up from (positive) atomics by $\land$, $\forall$, and $\exists$. But it may also hold for a larger class of sentences.
Homology of $n$-sheeted covering space of Klein bottle
Letting $\chi$ denote Euler characteristic, we have $\chi(Y) = n \cdot \chi(X)=0$. So $Y$ is a compact surface of Euler characteristic zero. I'll add the assumption that $Y$ is connected, with a few words on the disconnected case afterwards. By the classification of surfaces, $Y$ is either a torus or a Klein bottle, and we just have to figure out which one. I'll prove that $Y$ is a Klein bottle, whose homology one calculates to be $$H_0(Y;\mathbb{Z})=\mathbb{Z}, \,\, H_1(Y;\mathbb{Z}) = \mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}, \,\, H_2(Y;\mathbb{Z}) = 0 $$ The group $G$ acts on $\mathbb{R}^2$ by deck transformations with quotient $X$. Some elements of $G$ act preserving orientation of $\mathbb{R}^2$, and some act reversing orientation. Since the quotient $X$ is nonorientable, the orientation preserving elements form a normal subgroup $N < G$ of index $2$, in fact there is a surjective homomorphism $G \mapsto \mathbb{Z}/2\mathbb{Z}$ whose kernel is $N$. By the covering space theory, there is an index $n$ subgroup $H < G$ such that the universal covering map $\mathbb{R}^2 \mapsto X$ lifts to a universal covering map $\mathbb{R}^2 \mapsto Y$ with deck transformation group $H$. The quotient $Y = \mathbb{R}^2/H$ is a torus if and only if $H$ has no orientation reversing elements, and it is a Klein bottle otherwise. So it remains to decide whether $H$ has any orientation reversing elements. Restrict the homomorphism $G \mapsto \mathbb{Z}/2\mathbb{Z}$ to the subgroup $H$. If $H$ has no orientation reversing elements then this restriction is the trivial homomorphism and so $H < N$, implying the following equation of subgroup indices: $$n = [G:H] = [G:N] \, [N:H] = 2 \, [N:H] $$ and so $n$ is even, a contradiction. It follows that $H$ contains orientation reversing elements, so $Y$ is a Klein bottle. Now a few words on the case when $Y$ is disconnected. By the same Euler characteristic argument, each component of $Y$ is a torus or Klein bottle. The degrees of the restrictions to the components partition the total degree $n$ as a sum of positive integers. Since $n$ can be partitioned in more than one way, there is not a unique solution to the problem in the disconnected case. Nonetheless one can still enumerate the possibilities, if that is desired: all of the components that cover $X$ with odd degree are Klein bottles, by the above argument; the remaining components, that cover $X$ with even degree, can be either tori or Klein bottles (each of which have a covering map over the Klein bottle of any given even degree).
Propositional Calculus: An algorithm to determine whether a finite sequence belongs to $\mathcal{L_0}$
If you consult the book Introduction to Metamathematics by S. C. Kleene, you can find an algorithm for a distinct infix language which might give you some ideas here. For Polish notation several algorithms are known. One of them is the following. Assign $-1$ to each propositional symbol. Assign $0$ to $\lnot$ or "N". Assign $1$ to $\rightarrow$ or "C". (I'm using C and N for the example.) Then we sum those numbers left to right. For example:

C  N  a1  C  C  C  a2  C  a3  a4  a5  a1
1  1   0  1  2  3   2  3   2   1   0  -1

C  C  N  a1  C  C  a2  a3
1  2  2   1  2  3   2   1

A string is a formula in Polish notation if and only if the running sum equals $-1$ at the last symbol and at no earlier symbol. (For a single propositional symbol the sequence of sums is just $-1$; for longer strings it starts with $0$ or $1$.) So the first example above is a formula, while the second is not.
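For concreteness, here is a short Python transcription of that running-sum test (treating "C" as $\rightarrow$, "N" as $\lnot$, and any other token as a propositional symbol):

```python
def is_polish_formula(tokens):
    """A token string is a well-formed Polish-notation formula iff the
    running sum (C: +1, N: 0, proposition: -1) first reaches -1 exactly
    at the last token."""
    weight = {'C': 1, 'N': 0}
    total = 0
    for i, tok in enumerate(tokens):
        total += weight.get(tok, -1)   # any other token is a proposition
        if total == -1:
            return i == len(tokens) - 1
    return False

print(is_polish_formula('C N a1 C C C a2 C a3 a4 a5 a1'.split()))  # True
print(is_polish_formula('C C N a1 C C a2 a3'.split()))             # False
```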
What defines a convex Cluster and how it differentiates from other types?
Basically, in a convex cluster, you can draw a straight line from any point in the cluster to any other point in the cluster without leaving the cluster. For example, a U-shaped cluster would not be convex, because you could not draw a straight line from one end of the U to the other without leaving the cluster and crossing empty space. The Wikipedia article on convexity explains this in more detail.
Rotation matrix around specified axis
Your rotation matrix rotates about an axis that passes through the origin. Whatever source you cribbed it from most likely mentions that somewhere. Unless you happen to be very lucky, the rotation axis defined by your two points doesn’t. So, what you’re doing is rotating about an axis that’s parallel to the one you want. There are several ways to fix this problem. The simplest for your code would be to translate lm3 by an amount that puts the rotation axis through the origin, rotate using the method you already have, then translate back. That is, rotate either lm3-lm1 or lm3-lm2, then add lm1 or lm2 as appropriate to the result.
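As an illustration of the translate-rotate-translate-back recipe, here is a minimal numpy sketch; it uses Rodrigues' rotation formula for the rotation about an axis through the origin, and the variable names lm1, lm2, lm3 mirror the question's (the sample values are hypothetical).

```python
import numpy as np

def rotate_about_axis(p, axis_point, axis_dir, theta):
    """Rotate point p by angle theta about the line through axis_point
    with direction axis_dir."""
    k = np.asarray(axis_dir, dtype=float)
    k /= np.linalg.norm(k)
    v = np.asarray(p, dtype=float) - axis_point     # shift axis to origin
    # Rodrigues: v cos(t) + (k x v) sin(t) + k (k.v) (1 - cos(t))
    rotated = (v * np.cos(theta)
               + np.cross(k, v) * np.sin(theta)
               + k * np.dot(k, v) * (1.0 - np.cos(theta)))
    return rotated + axis_point                     # shift back

lm1 = np.array([1.0, 0.0, 0.0])   # one point defining the axis
lm2 = np.array([1.0, 0.0, 1.0])   # the other point on the axis
lm3 = np.array([2.0, 0.0, 0.0])   # the point to rotate
print(rotate_about_axis(lm3, lm1, lm2 - lm1, np.pi / 2))  # [1. 1. 0.]
```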
How would I solve $(x-3)(x-2)(x-1)\gt0$?
In fact you don’t want to multiply it out: the factored form is much more useful here. You have a product of three numbers; for the moment just call them $a,b$, and $c$. When is such a product positive? It’s certainly positive if all three of $a,b$, and $c$ are positive. But it’s also positive if exactly two of them are negative and the remaining one is positive. These are the only ways to get a positive product from three factors. Now, here’s a chart of the signs of the factors $x-3,x-2$, and $x-1$ and their product:

                 1            2            3
-----------------|------------|------------|------------
x-3:      -      -     -      -      -     0      +
x-2:      -      -     -      0      +     +      +
x-1:      -      0     +      +      +     +      +
product:  -      0     +      0      -     0      +

When $x<1$, all three factors are negative, and so is their product. When $x=1$, one factor is $0$, and so is the product. When $1<x<2$, the factor $x-1$ is positive and the other two are negative, so the product is positive. At $x=2$ the product is $0$ again; for $2<x<3$ only $x-3$ is negative, so the product is negative; and for $x>3$ all three factors are positive, so the product is positive. Thus the product is positive exactly when $1<x<2$ or $x>3$. In interval notation, the solution set of the inequality is $(1,2)\cup(3,\infty)$. This technique works whenever you’re comparing a product with $0$.
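If you later want to check such an inequality mechanically, sympy can solve it directly (a one-line sketch):

```python
import sympy as sp

x = sp.symbols('x', real=True)
sol = sp.solve_univariate_inequality((x - 3) * (x - 2) * (x - 1) > 0, x)
print(sol)   # Union(Interval.open(1, 2), Interval.open(3, oo))
```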
Computational complexity of Eulerian and Hamiltonian paths and cycles in (un)directed graphs
See the following theorems from the lecture “Round Trips” of our block course “Algorithmic Graph Theory” by Joachim Spoerhase and Alexander Wolff. Theorem. Let $G = (V,E)$ be an undirected and connected graph. (i) We can test in $O(E)$ time if $G$ is Eulerian. (ii) If $G$ is Eulerian, we can find an Eulerian cycle in $O(E)$ time. The problem of finding an Eulerian path can be reduced to that of finding an Eulerian cycle as follows. If a graph $G$ has an Eulerian path, then $G$ has exactly two vertices of odd degree. Add an edge $e$ between these vertices, find an Eulerian cycle in $G+e$, and then remove $e$ from the cycle, obtaining an Eulerian path in $G$. Theorem. Let $G = (V,E)$ be a directed and weakly connected graph. (i) We can test in $O(E)$ time if $G$ is Eulerian. (ii) If $G$ is Eulerian, we can compute an Eulerian cycle in $O(E)$ time. Theorem. [Karp, 1972] (Un)directed Hamiltonian cycle and path are NP-hard.
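For concreteness, here is a compact Python sketch of Hierholzer's algorithm, one standard way to realize the $O(E)$ bound of the first theorem (it assumes the input is connected with all degrees even, i.e. the Eulerian test has already passed):

```python
from collections import defaultdict

def eulerian_cycle(edges):
    """Hierholzer's algorithm: returns an Eulerian cycle (as a vertex
    list) of an undirected connected graph in which every vertex has
    even degree. Runs in O(E): each edge is pushed and popped once."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    used = [False] * len(edges)
    stack, cycle = [edges[0][0]], []
    while stack:
        v = stack[-1]
        while adj[v] and used[adj[v][-1][1]]:  # drop used edges lazily
            adj[v].pop()
        if adj[v]:
            w, i = adj[v].pop()
            used[i] = True
            stack.append(w)
        else:
            cycle.append(stack.pop())
    return cycle

print(eulerian_cycle([(0, 1), (1, 2), (2, 0)]))  # [0, 1, 2, 0]
```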
How is the formula for the focal point of a ball lens derived?
The formula assumes the angles are small, as I will show below. The focal length may be shown by the geometry to be $$f=R \left [ \cos{(2 \theta'-\theta)}+ \frac{\sin{(2 \theta'-\theta)}}{\sin{(2 \theta - 2 \theta')}} \cos{(2 \theta-2 \theta')} \right]$$ where $\theta$ is the angle of incidence from the air, and $\theta'$ is the angle of refraction in the glass. We now assume the so-called paraxial approximation, in which the sines are replaced by their arguments and the cosines are replaced by $1$. Using the paraxial form of Snell's Law: $$N \theta' = \theta$$ we find that $$f \approx R \left [ 1 + \frac{(2-N) \theta'}{2 (N-1) \theta'} \right ] = \frac{N}{2(N-1)} R $$ ADDENDUM I will elaborate on how I got that exact formula for the focal length. Here's a diagram: Note that the solid lines represent the ray path and the dotted lines are for measurement. The spherical geometry is expressed in the fact that the triangle in the circle is isosceles. Therefore the angle $\Delta$ in the picture is $$\Delta = \pi - [\theta + (\pi - 2 \theta')] = 2 \theta'-\theta$$ Then $x=R \cos{(2 \theta'-\theta)}$. The other leg of that right triangle of which $x$ is a leg is $z=R \sin{(2 \theta'-\theta)}$. The length $y$ is determined from the angle of refraction out of the glass, which is $\theta$ by Snell's Law; the angle of the right triangle of which $y$ is a leg is $\theta-\Delta$ (alternate interior angles). Thus, $\tan{(\theta-\Delta)} = \tan{(2 \theta - 2 \theta')} = z/y$, and $$y = R \cos{(2 \theta-2 \theta')} \frac{\sin{(2 \theta'-\theta)}}{\sin{(2 \theta-2\theta')}}$$ The focal length is $f=x+y$, and the result follows. ADDENDUM II You should see how well the paraxial approximation fares for this ball lens. Here I take your example value of $N=1.3$ and $R=1$, and present a plot of focal length vs. initial ray height $h$ off the optical axis (i.e., $\theta = \arcsin{(h/R)}$): Two things to note: 1) the exact focal length is less than the paraxial focal length, and 2) the paraxial approximation only works for a pencil of rays that are less than about $0.15$ from the axis.
Proof $r_n := x - \sum_{k=1}^n a_k10^{-k}$
Note that \begin{align*} [0,1/10)\cup[1/10,2/10)\cup\cdots\cup[9/10,1)=[0,1). \end{align*} There exists some $a_{1}\in\{0,1,...,9\}$ such that \begin{align*} \dfrac{a_{1}}{10}\leq x<\dfrac{a_{1}+1}{10}, \end{align*} so \begin{align*} 0\leq x-\dfrac{a_{1}}{10}<\dfrac{1}{10}. \end{align*} Moreover, \begin{align*} [0,1/10^{2})\cup[1/10^{2},2/10^{2})\cup\cdots\cup[9/10^{2},10/10^{2})=[0,1/10). \end{align*} There exists some $a_{2}\in\{0,1,...,9\}$ such that \begin{align*} \dfrac{a_{2}}{10^{2}}\leq x-\dfrac{a_{1}}{10}<\dfrac{a_{2}+1}{10^{2}}, \end{align*} so \begin{align*} 0\leq x-\dfrac{a_{1}}{10}-\dfrac{a_{2}}{10^{2}}<\dfrac{1}{10^{2}}. \end{align*} Proceed inductively to get the result.
Matrix of exponentials is isomorphic to the real line with addition (positive real line with multiplication)
If you consider the correspondence $$ M_x:=\left( \begin{array}{cc} e^{x}\,\,\, &0\\[15pt] 0 &e^{-x} \end{array} \right)\to x $$ then it is clear that multiplication of matrices corresponds to sums of their images, $$ \left( \begin{array}{cc} e^{x}\,\,\, &0\\[15pt] 0 &e^{-x} \end{array} \right)\cdot \left( \begin{array}{cc} e^{y}\,\,\, &0\\[15pt] 0 &e^{-y} \end{array} \right)=\left( \begin{array}{cc} e^{x+y}\,\,\, &0\\[15pt] 0 &e^{-(x+y)} \end{array} \right)\to (x+y) $$ Also, the identity matrix corresponds to $0$, the addition neutral, and the inverse of $M_x$ is $M_{-x}$. That defines an isomorphism between those matrices with matrix multiplication and the real line with addition. Also, the real line with addition is isomorphic to the positive reals with multiplication, via the exponential, $x\to e^x$, since $x+y\to e^{x+y}=e^xe^y$, etc.
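A two-line numerical check of the correspondence (a numpy sketch):

```python
import numpy as np

def M(x):
    # the diagonal matrix diag(e^x, e^-x)
    return np.diag([np.exp(x), np.exp(-x)])

x, y = 0.7, -1.3
print(np.allclose(M(x) @ M(y), M(x + y)))   # True: products map to sums
print(np.allclose(M(0.0), np.eye(2)))       # True: 0 maps to the identity
```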
Understanding $\frac{x^{1/(n+1)}-1}{2-x^{1/(n+1)}}=\frac{\log x}{n+1}+O(n^{-2})$
$$ x^{1/n}=e^{\tfrac{\log x}{n}}=1+\frac{\log x}{n}+O(n^{-2}). $$
Why does the determinant of the Dirac operator on $S^n$ approach $1$ as $n\to\infty$?
A comment posted by Anthony Carapetis cites Møller, "Dimensional asymptotics of determinants on $S^n$, and proof of Bär-Schopka conjecture," Math. Ann. 343 (2009) 35–51, https://arxiv.org/abs/0709.0067 which proves the conjecture.
Can you use the set membership symbol ∈ to indicate subsequence membership in a larger set?
It is incorrect, full stop. If you really mean that $X$ and $A$ are just sets, the correct notation is $X\subseteq A$ (or, if you want to make it clear that $X$ is a proper subset of $A$, $X\subsetneqq A$ or the like). If $X$ and $A$ are sequences, or an ordered triple and an ordered $5$-tuple, there is no standard notation; if you need to talk about this kind of relationship, you should define a suitable notation. Here, for instance, you might say that $X=A[1,3]$. Or you might define $\preceq$ to mean subsequence of and write $X\preceq A$.
Can *any* real polynomial be factored linearly (plus a constant) over $\mathbb{R}$?
No, this is not possible in general. For instance, let $p(x)=x^3+x$. Note that $p'(x)=3x^2+1$ is always positive, so $p$ is strictly increasing. That means that for any $t\in\mathbb{R}$, $p(x)-t$ has at most one real root. Moreover, $p(x)-t$ cannot have any repeated real roots, since its derivative is never $0$. So there is no $t$ such that $p(x)-t$ can be factored into linear factors over $\mathbb{R}$. In fact, you can prove this without any calculus as follows. First, note that $x^3+x=x(x^2+1)$ is strictly increasing: when $x>0$ it is obviously positive and increasing, and when $x<0$ it is negative and increasing since both factors are decreasing in absolute value. So all that remains is ruling out the possibility that $x^3+x-t$ has a single real triple root for some $t$. But if $b\in\mathbb{R}$ were a triple root of $x^3+x-t$, we would have $x^3+x-t=(x-b)^3=x^3-3bx^2+3b^2x-b^3$. Comparing the quadratic terms gives $b=0$ but then the linear terms do not match, so this is impossible.
Doubt about the definition of an open subset in Euclidean space $\mathbb{R}^n$
EDIT: My initial answer was incorrect; the two characterizations are indeed equivalent. See the accepted answer on this post: Is this statement true: a set is open if every point has a closed ball contained inside of the set. I.e., you can take either open or closed balls.
Definition Of Strongly Regular Graphs
Under the usual additional hypothesis — that a strongly regular graph is also regular — your four properties are all equivalent. Suppose that all vertices in $G$ have degree $k$. If $v$ and $w$ are adjacent, then we can partition the remaining vertices of the graph into the four parts $$N_{vw}, \quad N_v, \quad N_w, \quad N_{\varnothing}$$ according to whether they are adjacent to both $v$ and $w$, just $v$, just $w$, and neither. Regularity tells us that $|N_{vw}| + |N_v| = |N_{vw}| + |N_w| = k-1$; equivalently, $|N_v| + |N_\varnothing| = |N_w| + |N_\varnothing| = n-k-1$. Together with this, either of the two possible conditions you could impose on pairs of adjacent vertices would be enough to tell us the sizes of all four parts: If we know that any two adjacent vertices have $p$ common neighbors, then $|N_{vw}| = p$, so $|N_v| = |N_w| = k-p-1$, and therefore $|N_\varnothing| = n-2k+p$. If we know that any two adjacent vertices have $p$ common non-neighbors, then $|N_\varnothing| = p$, so $|N_v| = |N_w| = n-k-p-1$, and therefore $|N_{vw}| = 2k+p-n$. So the two conditions are equivalent (but with different values of $p$). Almost exactly the same thing happens with the two conditions on pairs of non-adjacent vertices. As a result, all four possible combinations of a condition on neighbors and a condition on anti-neighbors, which superficially produce "strongly regular", "property $X$", "property $Y$", "strongly regular complement", actually produce the same class of graphs.
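These constraints are easy to check on a concrete example; the sketch below uses networkx and the Petersen graph, a standard strongly regular graph, to confirm that adjacent pairs all have the same number of common neighbors, and likewise for non-adjacent pairs.

```python
import networkx as nx
from itertools import combinations

G = nx.petersen_graph()    # strongly regular with parameters (10, 3, 0, 1)

def common_neighbors(u, v):
    return len(set(G[u]) & set(G[v]))

adjacent = {common_neighbors(u, v)
            for u, v in combinations(G.nodes, 2) if G.has_edge(u, v)}
non_adjacent = {common_neighbors(u, v)
                for u, v in combinations(G.nodes, 2) if not G.has_edge(u, v)}
print(adjacent, non_adjacent)   # {0} {1}: each condition holds with one value
```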
Jacobson's proof of $D_n = \langle x,y\mid x^n=y^2=1, (xy)^2 = 1\rangle$
I'll try to explain what (at least I think) happens: You have the elements $R$ and $S$ in the group $D_n$ and know that these elements satisfy the relations given in the brackets. Then we can look at the group homomorphism $$ \phi: FG^{(2)} \to D_n $$ defined by $x \mapsto R, y \mapsto S$. Since $D_n = \langle R,S\rangle$ this homomorphism is surjective. The subgroup $K$ that you mentioned is in the kernel of this homomorphism, so by the fundamental theorem of group homomorphisms we get a well defined surjective homomorphism $$ \phi': FG^{(2)}/K \to D_n$$ Now, since $|D_n|=2n$ and he shows that $|FG^{(2)}/K|\leq 2n$, we have a surjection from a set of size at most $2n$ onto a set of size $2n$, so it has to be bijective; therefore $\phi'$ is an isomorphism. Hope that helps.
Ax=b has no solution - version 3
Let me identify the matrix $A$ with the linear map $T_A \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ defined using left multiplication ($T_A(x) = Ax$). There are two basic relations between the kernel and image of $A$ (or, more precisely, $T_A$) and the kernel and image of $A^T$, given by $$ \ker(A) = \operatorname{im}(A^T)^{\perp}, \quad \operatorname{im}(A) = \ker(A^T)^{\perp}. $$ Let's assume for a second we know those relations. Then $Ax = b$ has no solution if and only if $b \notin \operatorname{im}(A)$, if and only if $b \notin \ker(A^T)^{\perp}$, if and only if there exists $y \in \mathbb{R}^m$ such that $A^T(y) = 0$ and $\left< b, y \right> = b^T y \neq 0$. Thus, it is enough to prove that $\operatorname{im}(A) = \ker(A^T)^{\perp}$. We don't need the first relation, but it is useful to remember, and it follows from the second relation by taking $A = A^T$ and using $(A^T)^T = A$ and $(V^{\perp})^{\perp} = V$. Let $y \in \ker(A^T)$ and let $b \in \operatorname{im}(A)$. Choose $x$ such that $Ax = b$. Then $$ \left< Ax, y \right> = \left< x, A^T y \right> = 0 $$ which shows that $\operatorname{im}(A) \subseteq \ker(A^T)^{\perp}$. Since $$ \dim \ker(A^T)^{\perp} = m - \dim\ker(A^T) = m - (m - \dim\operatorname{im}(A^T)) = \operatorname{rank}(A^T) = \operatorname{rank}(A) = \dim \operatorname{im}(A)$$ we see that the dimensions of both vector spaces are equal, and so we must have $\operatorname{im}(A) = \ker(A^T)^{\perp}$.
How can we characterize polynomials in $\mathbb{R}^2$ that are harmonic
This is a much-studied question.
A question about the polynomials $1+2x+\cdots+(2n+1) x^{2n}$, $n \in \mathbb Z_+$
We have: $$ P_n'(x)=\sum_{k=1}^{2n}k(k+1)x^{k-1}=\frac{2}{(1-x)^3}\left[1-(n+1)(2n+1)x^{2n}+4n(n+1)x^{2n+1}-n(2n+1)x^{2n+2}\right] $$ so $P_n'(-1)=-2n(n+1)$ and $P_n(-1)=n+1$. Since: $$ \frac{d}{dx}\left((1-x)^3 P_n'(x)\right)= 2n(2n+1)(2n+2)(1-x)^2 x^{2n-1} $$ it follows that $P_n(x)$ has an absolute minimum in the interval $(-1,0)$. Since: $$ \sum_{k\geq 0}(k+1)x^k = \frac{1}{(1-x)^2} $$ and the convergence is uniform over any compact subset of $(-1,0]$, each claim in the question is then quite easy to prove.
Help generating a grammar with {a,b,c} where the number of as and bs are different or bs and cs, or both
Look at it like this: at least one of the following has to be true:

- the number of $a$'s is larger than the number of $b$'s
- the number of $b$'s is larger than the number of $a$'s
- the number of $b$'s is larger than the number of $c$'s
- the number of $c$'s is larger than the number of $b$'s

So we can make a grammar for each of these, and then simply combine them. A grammar for the first condition looks like this: \begin{align*} S_1&\to aXC\\ X&\to aX \mid aXb\mid\varepsilon\\ C&\to cC\mid\varepsilon \end{align*} The first rule means there is at least one $a$ in the string, which is necessary. The second rule means that either we can add $a$'s to the front, or we can add both an $a$ to the front and a $b$ to the middle. Finally, the number of $c$'s doesn't matter. Now we can make similar grammars with starting variables $S_2$, $S_3$ and $S_4$ for the other three conditions above, and then we can combine these with the rule \begin{align*} S\to S_1\mid S_2\mid S_3\mid S_4 \end{align*}
Troubles with finding closest point on a plane to the origin
Note, the line that passes through origin and is normal to the plane is $$x=\frac{5t}{\sqrt{75}},\>\>\>y=\frac{-7t}{\sqrt{75}},\>\>\>z=\frac{t}{\sqrt{75}}\tag 1$$ Substitute above in the equation of the plane $5x-7y+z=-21$ to obtain $t=-\frac{21}{\sqrt{75}}$. Plug it into (1) to obtain the point on the plane, $$(-\frac{7}{5},\frac{49}{25},-\frac{7}{25})$$
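A quick numerical confirmation (a numpy sketch of the same normal-line computation, using that the closest point is $p = d\,n/\lVert n\rVert^2$ for the plane $n\cdot x=d$):

```python
import numpy as np

n = np.array([5.0, -7.0, 1.0])    # normal of the plane 5x - 7y + z = -21
d = -21.0

p = d * n / np.dot(n, n)          # foot of the perpendicular from the origin
print(p)                          # [-1.4   1.96 -0.28] = (-7/5, 49/25, -7/25)
print(np.dot(n, p))               # -21.0: the point indeed lies on the plane
```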
Several Questions on Smooth Urysohn's Lemma
I found a proof in Lee's Introduction to Smooth Manifolds, 2nd ed. (page 47); the answer is positive. Briefly, it suffices, for any closed subset $K$, to construct a smooth function $f:\mathbb{R}^n\to \mathbb{R}_{\geq 0}$ such that $f^{-1}(0)=K$. For any closed set $K$, $\mathbb{R}^n\setminus K$ is a union of countably many small balls $\{B_{r_i}(x_i)\}$. Then pick a smooth function $\chi:\mathbb{R}^n\to \mathbb{R}_{\geq 0}$ which takes the value $0$ when and only when $x\notin B_1(0)$. Then define $$f(x)=\sum_{i=1}^\infty\frac{r_i^i}{2^i C_i}\chi\left(\frac{x-x_i}{r_i}\right)$$ where the $r_i$ can be assumed to be $\leq 1$, and $C_i\geq 1$ is chosen such that $C_i\geq \max_{|\alpha|\leq i}||\partial^{\alpha} \chi||$. Then the $\alpha$-th derivative of the $i$-th summand is bounded by $\frac{1}{2^i}$ when $i\geq |\alpha|+1$, so the differentiated series converges uniformly; thus $f$ is smooth. It is not difficult to check that $f(x)=0\iff x\in K$.
Rigorously prove that this is not the minimum bounding circle of the triangle.
I will leave the details to you, but the idea is as follows: Assume there exists a smallest bounding circle for a triangle with just one point $A$ on its circumference. Given such a circle, show that you can construct a smaller circle which still contains the triangle by moving the centre of the circle in the direction of $A$, and shrinking the circle by an amount (which will depend on how much you moved the centre). Since this is a smaller circle, your assumption that there existed such a smallest circle was false, and so no smallest bounding circle can have just one point on its circumference.
Showing product of disjoint cycle
Trace what happens to each element: in $(ab)(cd)$ the element $d$ goes to $c$ which then is not changed; in $(dac)(abd)$ the element $d$ goes to $a$ which then goes to $c$. So $d$ ends up at $c$ in both cases. Check $a,b,c$ in the same way. And any element $x$ other than $a,b,c,d$ is not affected by either permutation, so ends up as $x$ itself in both cases.
Finding autocovariance of AR(2)
The first computation is correct; just be careful right after the "so we get": $$ \gamma(0) = \frac{2\phi_1\phi_2\gamma(1) + \sigma^2}{1-\phi_1^2-\phi_2^2} $$ The second one is OK: $$EX_t^2 = \phi_0 EX_t+ \phi_1 EX_t X_{t-1}+ \phi_2 EX_t X_{t-2} + E[X_t Z_t] $$ Now subtract $$ [EX_t]^2 = \phi_0 [EX_t]+ \phi_1 EX_t EX_{t-1}+ \phi_2 EX_t EX_{t-2} + EX_t E Z_t $$ and using $\operatorname{cov}(X_t,Z_t) = \sigma^2$ you get: $$ \gamma(0) = \phi_1 \gamma(1)+ \phi_2 \gamma(2) + \sigma^2 $$ This is a particular case of the Yule-Walker equations.
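Given numerical parameters (the values below are hypothetical), the identities $\gamma(0)=\phi_1\gamma(1)+\phi_2\gamma(2)+\sigma^2$, $\gamma(1)=\phi_1\gamma(0)+\phi_2\gamma(1)$, and $\gamma(2)=\phi_1\gamma(1)+\phi_2\gamma(0)$ can be solved as a linear system; a small numpy sketch:

```python
import numpy as np

phi1, phi2, sigma2 = 0.5, 0.3, 1.0   # hypothetical AR(2) parameters

# Unknowns: (gamma0, gamma1, gamma2); one row per identity above
A = np.array([[1.0,        -phi1, -phi2],
              [-phi1, 1.0 - phi2,   0.0],
              [-phi2,       -phi1,  1.0]])
b = np.array([sigma2, 0.0, 0.0])
gamma0, gamma1, gamma2 = np.linalg.solve(A, b)
print(gamma0, gamma1, gamma2)
```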
What is the difference of defining a vector field on a manifold and a $C^{k-1}$ vector field on a $C^{k}$ manifold?
If $\gamma$ is an integral curve for the $C^{k-1}$ vector field $X$, then $\gamma$ is $C^{k}$ (its derivative is $X\circ\gamma$, which can be differentiated a further $k-1$ times). Thus, for $\gamma$ to exist, you should have a notion of $k$th derivative.
Find the complete solution of differential equation
This equation, $y' + y = 2e^{-x} - 1$, is first order linear and thus it can be solved by finding an integrating factor. The integrating factor is $\exp\left(\int p(x) \, dx\right) = \exp\left(\int 1 \, dx\right) = e^{x}$, so multiply both sides by $e^{x}.$ This gives $$e^{x}y' + e^{x}y \;=\; e^{x} \cdot 2e^{-x} - e^{x} \;=\; 2 - e^x $$ The left side "factors" as the derivative of ($y$ times the integrating factor). This always happens because of the way the integrating factor was defined. Your book will probably show this, but I'll do it below anyway. In class I used to explain how this is analogous to solving a quadratic equation by completing the square. $$(ye^{x})' \; = \; 2 - e^x $$ Note that by using the product rule you can verify that $(ye^{x})' = e^{x}y' + e^{x}y.$ Now integrate both sides: $$ye^{x} \; = \; 2x - e^x + C$$ Now divide both sides by $e^x$ and you'll get $$y \; = \; 2xe^{-x} - 1 + Ce^{-x}$$ Here's why the integrating factor stuff works. The general equation has the form $$y' + p(x)y \; = \; q(x)$$ Multiply both sides by $e^{\int p(x) \, dx},$ which is the integrating factor: $$ e^{\int p(x) \, dx}y' + e^{\int p(x) \, dx}p(x)y \;\; = \;\; e^{\int p(x) \, dx}q(x)$$ I claim that the left side can be "factored" to give $$ (ye^{\int p(x) \, dx})' \;\; = \;\; e^{\int p(x) \, dx}q(x)$$ To see that these two left hand sides are the same, use the product rule to expand $(ye^{\int p(x) \, dx})',$ which gives $$y' \cdot e^{\int p(x) \, dx} \;\; + \;\; y \cdot \frac{d}{dx} \left( e^{\int p(x) \, dx}\right)$$ $$ y' \cdot e^{\int p(x) \, dx} \;\; + \;\; y \cdot e^{\int p(x) \, dx} \cdot \frac{d}{dx}\int p(x) \, dx $$ $$ y' \cdot e^{\int p(x) \, dx} \;\; + \;\; y \cdot e^{\int p(x) \, dx} \cdot p(x)$$ In the last three equations, I made use of the $(e^u)' = e^{u}u'$ formula for differentiating "$e$ raised to a power" and the fact that the derivative undoes the antidifferentiation operation. Finally, a natural question that textbooks don't always discuss is why we choose "$C=0$" when performing the integration to obtain the integrating factor. The most instructive way I know to show students is to pick a specific equation, say the equation solved above, and instead of using $e^x$ for the integrating factor, use $e^{x + 5}$ and see what happens. In effect, all you're doing is multiplying both sides by an additional nonzero constant factor, which neither contributes anything nor messes anything up (except that the details are very slightly messier).
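If you want to double-check the final answer symbolically, a minimal sympy sketch:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')
    sol = sp.dsolve(sp.Eq(y(x).diff(x) + y(x), 2*sp.exp(-x) - 1), y(x))
    print(sol)   # equivalent to y = 2x e^{-x} - 1 + C e^{-x}, up to how sympy groups the constant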
What are the necessary and sufficient properties for a function to be called a RNG?
Answering the question in the OP comment (how to convert an RNG generating the numbers 1..5 into one generating the numbers 1..7): the algorithm used in the codegolf answer boils down to

    repeat
        x := rand5() + 5*rand5() - 5
    until x < 8
    return x

The expression rand5()+5*rand5()-5 generates the numbers $1..25$ with equal probability $1/25$. The repeat-until loop implements conditioning on $x<8$, and in the conditional model the only possible numbers are $1..7$, each generated with equal probability $1/7$.
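In runnable form (a Python sketch, with rand5 simulated by the random module):

    import random

    def rand5():
        return random.randint(1, 5)           # uniform on 1..5

    def rand7():
        while True:
            x = rand5() + 5 * rand5() - 5     # uniform on 1..25
            if x < 8:                         # keep only 1..7 (rejection sampling)
                return x

    counts = [0] * 8
    for _ in range(700_000):
        counts[rand7()] += 1
    print(counts[1:])                         # each entry is about 100,000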
If $m$ and $n$ belong to $A$, then $m+n$ belongs to $A$.
Some hints: For a) you should consider Bezout's identity. Note that if $n\in A$ then every multiple of $n$ is also in $A$. For b) use induction. For c), let $n\ge n_0^2$ and let $k=n-n_0^2$. Use Euclidean division to write $k=qn_0+r$ with $0\le r<n_0$. Then $n=n_0(n_0+q-r)+r(n_0+1)$, and both coefficients $n_0+q-r$ and $r$ are non-negative.
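If you want to sanity-check the identity in (c) numerically, a Python sketch (it assumes the goal is to write $n$ as a non-negative combination of $n_0$ and $n_0+1$):

    # Check: every n >= n0^2 can be written as a*n0 + b*(n0+1) with a, b >= 0.
    def representable(n, n0):
        k = n - n0 * n0
        q, r = divmod(k, n0)
        a, b = n0 + q - r, r              # coefficients from the hint
        return a >= 0 and b >= 0 and a * n0 + b * (n0 + 1) == n

    n0 = 7
    print(all(representable(n, n0) for n in range(n0 * n0, 10_000)))  # True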
Expression simplification.
$$|x+1|-|x-1|=\begin{cases}(x+1)-(x-1)=2 & \text{if } x\geq 1\\ (x+1)-(1-x)=2x & \text{if } -1\leq x\leq 1\\ (-x-1)-(1-x)=-2 & \text{if } x\leq -1\end{cases}$$
Parabola Question: How to derive an appropriate equation based on specific criteria?
Here's a simple way to parameterize it. Let's say that the horizontal distance the rabbit would travel if his starting and ending heights were the same is $d$. First suppose that the rabbit starts and ends at the same height, and that it takes 1 time unit to make the jump. Then $d=p_2-p_1$, and you can choose its $x$ and $y$ coordinates as a function of time to be $$x(t) = dt\tag{1}$$ $$y(t)=2dt(1-t)\tag{2}$$ which gives the starting and ending positions correctly, and ensures that the rabbit is at height $d/2$ halfway through the jump. If you like, you can substitute one into the other to get $$y = 2x(1-x/d) \tag{3}$$ Now suppose that instead, the horizontal distance of the jump is different from $d$. That is, he would jump the distance $d$, but the vertical height of his landing place is different, so he is either cut short (if he's jumping up) or he goes a bit further (if he's jumping down). We can use formula (3) as above, but demand that when his vertical height is $\Delta y=y_2-y_1$, he has moved a total horizontal distance $\Delta p=p_2-p_1$: $$\Delta y= 2\Delta p(1-\Delta p/d)$$ which you can rearrange to give $d$ in terms of the other parameters: $$d = \frac{(\Delta p)^2}{\Delta p - \tfrac{1}{2}\Delta y}$$ You can now use formula (3) with this value of $d$ to get the rabbit's path, or if you want a parameterization in terms of time, then use formulas (1) and (2) with this value of $d$. If you want the rabbit to take $T$ seconds to do the jump, rather than 1 second, then simply replace $t$ with $t/T$ on the right-hand side of formulas (1) and (2).
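Putting the formulas together, a Python sketch (variable names mirror the quantities above; time is rescaled so that $t=1$ is the landing point):

    def jump_path(p1, y1, p2, y2):
        """Parabolic hop from (p1, y1) to (p2, y2); returns pos(t) for t in [0, 1]."""
        dp, dy = p2 - p1, y2 - y1
        d = dp * dp / (dp - 0.5 * dy)   # the formula for d derived above
        t_land = dp / d                 # parameter value at which x(t) = dp
        def pos(t):
            s = t * t_land              # rescale so that t = 1 is the landing
            return p1 + d * s, y1 + 2 * d * s * (1 - s)
        return pos

    pos = jump_path(0.0, 0.0, 2.0, 1.0)
    print(pos(0.0), pos(0.5), pos(1.0))  # starts at (0, 0), lands at (2, 1)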
Solve $2a^b-b=1997$ in $\mathbb{N}$
Clearly there are no solutions with $a=1$, so we can assume that $a\ge2$. If $b\ge10$, then $2a^b\ge2\cdot2^{10}=2048$, which is too large (subtracting $b$ won't help, because $2\cdot 2^x-x$ is increasing when $x\in[2,\infty)$). So $b\in\{1,2,\dots,9\}$; moreover $b=2a^b-1997$ is odd, which leaves $b\in\{1,3,5,7,9\}$. Quick checking leaves $b=1, a=999$ and $b=3, a=10$ as the only possibilities.
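A brute-force confirmation (Python sketch; the bound $a<1000$ is safe, since already $2a-1=1997$ forces $a\le 999$):

    # Exhaustive check for 2*a**b - b == 1997 with small b.
    solutions = [(a, b)
                 for b in range(1, 10)
                 for a in range(1, 1000)
                 if 2 * a**b - b == 1997]
    print(solutions)   # [(999, 1), (10, 3)]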
Intensional Set Definitions like $\{ x | A(y) \}$
In the framework of axiomatic set theory, you cannot do that as you'll need some axiom to justify the existence of a set and in cases like these you'd use the axiom of separation for which you'd need another set. In other words, if $Y$ is a set, then $\{x \in Y \mid A(x)\}$ is a set, no matter what $A(x)$ is. But there's no general axiom that makes something like $\{x \mid A(x)\}$ a set. $\{x \mid 1 = 1\}$ would be the class (note: class, not set) of all sets. And even something like $\{x \mid x \neq 42\}$ would result in a proper class. See Russell's paradox.
Source for similarity between Leibniz formula and binomial theorem?
You can check out the Wikipedia page: https://en.wikipedia.org/wiki/General_Leibniz_rule. You will find that the proof by induction of the Leibniz formula is almost exactly the same as that of the binomial formula, and it is a purely algebraic proof in the sense that it is just about manipulating sums and products. But just like you don't remember and understand the binomial formula by remembering the proof by induction, I believe the best way to think about the Leibniz formula is to see it as a sum over all $2^n$ ways of choosing, at each of the $n$ differentiation steps, whether the derivative falls on $f$ or on $g$. The post you refer to gives an excellent explanation of this. Hope this helps!
For $(x_1,x_2), (y_1,y_2) \in \mathbb{R}^2$, does $⟨(x_1,x_2),(y_1,y_2)⟩={x_1}^2{y_1}^2 + {x_2}^2{y_2}^2$ define an inner product?
Since $\langle(2,0),(1,0)\rangle=4\ne2\langle(1,0),(1,0)\rangle$, no, it is not linear in the first variable, and therefore $\langle\cdot,\cdot\rangle$ is not an inner product.
Simplifying an exponential expression
$$\frac{(-1)^kk(k+1)}{2} + (-1)^{k+1}(k+1)^2$$ $$= (-1)^k(k+1)\left[\frac{k}{2} - (k+1)\right]$$ $$= (-1)^k(k+1)\cdot\frac{k-2(k+1)}{2}$$ $$= \frac{(-1)^k(k+1)}{2}\bigl(-(k+2)\bigr) = \frac{(-1)^{k+1}(k+1)(k+2)}{2}$$
$\{X_n\}$ are iid random variables with symmetric distribution
Here is an answer for $n=2$. Let $A=\{x : |x_1+\cdots+x_n| \ge \|x\|_\infty \}$, and note that if $x\in A$, then $-x \in A$. Let $B=\{ x : x_1 = 0 \text{ or } x_2 = 0 \}$; we see that $B \subset A$. Let $Q_k^\circ$ be the interiors of the four quadrants; we see that $Q_1^\circ \subset A$ and hence $Q_3^\circ \subset A$. Note that $B,Q_1^\circ,\dots,Q_4^\circ$ form a partition of $\mathbb{R}^2$. Since the distributions are symmetric, we see that $P Q_1^\circ = \cdots =P Q_4^\circ$, and since $PB + 4 P Q_1^\circ =1$, we see that $P A \ge PB + 2 P Q_1^\circ \ge \tfrac{1}{2} (PB + 4 P Q_1^\circ) = \tfrac{1}{2}$.
if $p\implies q$ is the same as $\lnot p \lor q$, then...
Well, $p \to r$ is the same as $\neg p \vee r$. Thus if $r=\neg q$, this means that $p \to \neg q$ is the same as $\neg p \vee (\neg q)$, which by order of logical operations can be written $\neg p \vee \neg q$. No need to worry about De Morgan's laws!
Divisors, Sections, Vanishing
Consider the Cartier divisor $D=\sum p_kD_k-\sum n_lE_l$ (with $p_k, n_l>0$) and a covering $(U_i)$ of $X$ such that on each $U_i$ the Cartier divisor $D\cap U_i$ is given by a rational function $f_i\in \mathcal{Rat}(U_i)$. The line bundle $\mathcal L=\mathcal O(D)$ may be defined by the Čech cocycle $g_{ij}=f_i/f_j \in \mathcal O^\ast(U_i\cap U_j)$. A better, more concrete, description of $\mathcal L =\mathcal O(D)$ is that as a sheaf it is the sub-$\mathcal O_X$-Module $\mathcal L\subset \mathcal{Rat}_X$ which restricted to $U_i$ equals $\mathcal L |_{U_i}=\frac{1}{f_i} \mathcal O_{U_i}$. The line bundle $\mathcal L$ then has a canonical rational section $s=1 \in \Gamma(X, \mathcal L)\subset \mathcal{Rat}(X)$. We have trivializations of $\mathcal L$ attached to our data, namely $\mathcal L|_{U_i} \stackrel {\simeq}{\longrightarrow} \mathcal O_{U_i}:g\mapsto gf_i$, mapping $s|_{U_i}=1\in \Gamma(U_i, \mathcal L)$ to $f_i\in \mathcal{Rat}(U_i)$. And now it is completely tautological that the divisor of $s$ is $D$, since that divisor is given by the $f_i$'s on $U_i$. Remarks: 1) As you see, the fact that $X$ is projective is actually irrelevant. 2) The construction works on any scheme, however singular it may be, as long as you consider Cartier divisors. 3) If your scheme is separated, noetherian, integral and locally factorial, you can indeed associate to each Weil divisor a unique Cartier divisor and then apply the preceding considerations. But this is rather a different issue: your question is really about Cartier divisors.
Upper and Lower Darboux integral of a piecewise function $f(x)=x$ and $f(x)=0$.
Any interval in any subdivision of $[a,b]$ must contain a rational and an irrational point, since the intervals are neither singletons nor empty; this is because the rationals and the irrationals are both dense in $\mathbb{R}$. Assuming $0\le a<b$, the supremum of $f$ on any non-empty, non-singleton closed interval $[x,y]$ is therefore $y$ (the right endpoint), and the infimum of $f$ on any interval is $0$. So the upper sum of any uniform partition $P_n:a=t_0< t_1< \dots<t_n=b$, with $t_k = a+\frac{k(b-a)}{n}$, must be $$U(f,P_n) =\sum_{k=1}^{n} (t_k-t_{k-1})\, t_k=\sum^{n}_{k=1} \frac{b-a}{n}\, t_k=\sum^{n}_{k=1} \frac{b-a}{n}\left(a+\frac{k(b-a)}{n}\right) = a(b-a) + \frac{(b-a)^2}{n^2}\sum^{n}_{k=1} k = a(b-a) + \frac{(b-a)^2}{n^2}\cdot\frac{n(n+1)}{2}= a(b-a) + \frac{1}{2} (b-a)^2\left(1+\frac{1}{n}\right)$$ since the supremum of $f$ over each subinterval is its right endpoint. Taking the infimum over $n$ of the upper sums of the uniform partitions gives $$a(b-a) + \frac{1}{2} (b-a)^2 = ab-a^2 + \frac{1}{2}b^2-ab+\frac{1}{2}a^2 = \frac{1}{2}b^2-\frac{1}{2}a^2,$$ so the upper integral is at most $\frac{1}{2}b^2-\frac{1}{2}a^2$. If $Q = \{q_0,\dots,q_N\}$ is any partition of $[a,b]$, then $$U(f, Q) = \sum_{i=1}^N q_i (q_i-q_{i-1}) = \sum_{i=1}^N q_i^2 -\sum_{i=1}^Nq_i q_{i-1} \geq \sum_{i=1}^N q_i^2 - \frac{1}{2} \sum_{i=1}^N (q_i^2 + q_{i-1}^2) = \frac{1}{2} \sum_{i=1}^N (q_i^2 - q_{i-1}^2) = \frac{1}{2}(b^2-a^2)$$ The inequality holds because $$(x-y)^2 \geq 0 \implies x^2-2xy+y^2 \geq 0 \implies 2xy \leq x^2 +y^2 \implies xy \leq \frac{1}{2}(x^2+y^2),$$ and the last equality holds because the squares cancel (a telescoping sum) for all $q_i \neq a,b$. So the upper integral is at least $\frac{1}{2}(b^2-a^2)$, since it is the greatest lower bound of the upper sums over all partitions. Hence the upper integral is exactly $\frac{1}{2}(b^2-a^2)$. The lower sum of any partition of $[a,b]$ must be $0$, as the infimum of $f$ on any subinterval is $0$; so the supremum of the lower sums over all partitions is $0$, and the lower integral is $0$. Does that get you started?
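Numerically, the upper sums of the uniform partitions (using the formula just derived, with $[a,b]=[1,3]$ as an example choice) approach $(b^2-a^2)/2$:

    # Upper Darboux sums of the uniform partitions on [a, b] = [1, 3].
    a, b = 1.0, 3.0
    for n in (10, 100, 1000, 10000):
        U = a * (b - a) + 0.5 * (b - a) ** 2 * (1 + 1 / n)
        print(n, U)              # tends to (b^2 - a^2)/2 = 4.0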
Products of Symmetric Matrices
Suppose $A= \begin{bmatrix} 2 & 0 &0 \\ 0 & 2 & 0 \end{bmatrix}, B=\begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}$ Then $AB$ is invertible, but $A$ and $B$ are not (they are not even square). You can't have $AB$ and $BA$ both invertible while $A$ and $B$ are both non-invertible, since $\operatorname{rank}(AB) \leq \min(\operatorname{rank}(A),\operatorname{rank}(B))$. And you can't use square matrices for such an example, since for square matrices $\det(A)\det(B) = \det(AB)$.
Complex integral help: $\oint_C \frac{\sin(z)}{z(z-\pi/4)} dz$
I don't know how you computed those residues, but you got the wrong value. The residue at $0$ is indeed $0$, but the residue at $\dfrac\pi4$ is $\dfrac{\sin\left(\frac\pi4\right)}{\frac\pi4}=\dfrac{2\sqrt2}\pi$. Therefore, your integral is equal to $4\sqrt2 i$. And it is a serious error to assert that if the integral were $0$, then the integrand would be analytic.
Confusion regarding the definition of addition modulo n on the group $Z_n$.
Well, both are correct. If $a,b$ are integers, the addition mod $n$ is defined as $$[a]+[b] = [a+b]$$ where $[a+b]$ is the congruence or residue class mod $n$. So if $a+b$ is not a remainder mod $n$, a number from $0$ to $n-1$, then divide $a+b$ by $n$ with remainder: $a+b =qn+r$, where $0\leq r<n$, and so $$[a+b]=[r].$$ There are two basic facts to use: (1) Each integer $a$ lies in the same residue class as its remainder $r$ mod $n$, i.e., if $a=qn+r$ with $0\leq r<n$, then $[a]=[r]$. (2) For distinct remainders $r,r'$ mod $n$, the residue classes are distinct, i.e., $[r]\ne[r']$.
Prove that the $1*1$ tile should be in the center
Hint: You're on the right track. Note that your particular coloring isn't the only way to color like this -- you can, for example, start with $ABCAB$ from right to left in the first row instead. Can you narrow down the possibility of the position of the $1\times 1$ tile using different colorings?
Proof verification: Every Euclidean space is complete
No induction needed. Every coordinate projection is $1$-Lipschitz, so if $(\vec{v}_n)$ is Cauchy in $\mathbb{R}^N$, then so is each coordinate sequence $((\vec{v}_n)_i)$ for $i=1,\ldots,N$. By completeness of the reals, each of these has a limit $p_i$. And a sequence converges in $\mathbb{R}^N$ iff it converges in every coordinate, so $\vec{v}_n \rightarrow (p_1,\ldots,p_N)$, as required.
Little Graph Theory Problem
No. Let $G$ be the disjoint union of $H_1=C_5$ and $H_2=P_5$. Every edge of $G$ belongs to exactly one of the $H_i$ and the $H_i$ have the same number of vertices, but not the same number of edges.
Help! Do not know how to solve ODE using homogeneity of differential equations: $xy\,dx+(2x^2+3y^2)\,dy=0$
Everything was fine until you made a silly mistake at the end. It should be $$\large{v+x\frac{dv}{dx}=-\frac{v}{2+3v^2}}$$ $$\large{x\frac{dv}{dx}=-\frac{3v+3v^3}{2+3v^2}}$$ $$\large{\frac{2+3v^2}{3v+3v^3}}\,dv=-\frac{dx}{x}$$
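One way to finish from here, using partial fractions on the left-hand side: $$\frac{2+3v^2}{3v(1+v^2)}=\frac{1}{3}\left(\frac{2}{v}+\frac{v}{1+v^2}\right),$$ so integrating both sides gives $$\frac{2}{3}\ln|v|+\frac{1}{6}\ln(1+v^2)=-\ln|x|+C,$$ and substituting back $v=y/x$ and exponentiating yields the implicit solution $y^4(x^2+y^2)=C'$.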
Why $\mathfrak{g} \subseteq V \otimes V^\ast$?
It is perhaps more useful to think of $V\otimes V^{*}$ as the space of linear maps from $V$ to $V$, with the bracket given by the commutator $[f,g]=f\circ g-g\circ f$. This is (isomorphic to) the Lie algebra of $GL(V)$.
$V$ is the direct sum of its weight spaces
You can express $\mathfrak g$ itself as $\mathfrak h\oplus\bigoplus_{\alpha\in R}\mathfrak g_\alpha$, where $R$ is the set of roots. Now, consider the sum $S$ of the weight spaces; you want to prove that $S=V$. You know that $S\ne\{0\}$, since you are working over $\Bbb C$, which is algebraically closed, and therefore the set of weights is not empty. On the other hand, by the definition of $S$, if $H\in\mathfrak h$ and $v\in S$, then $Hv\in S$. And if $X\in\mathfrak g_\alpha$, for some root $\alpha$, and if $v\in V_\beta$, for some weight $\beta\in\mathfrak h^*$, then $Xv\in V_{\alpha+\beta}\subseteq S$. But then $S$ is a submodule which is not $\{0\}$. So, since $V$ is irreducible, $S=V$.
Lagrange interpolation and Weierstrass theorem.
This is a hint rather than a full solution. This is a problem where you must assert control over the term $I(f) - I_n(f)$, where $I(f) = \int_a^b f(x)\, dx$. To that end, you must introduce terms over which you have control. By assumption, there is a sequence of functions $\{f_m\}_{m=1}^\infty$ such that $f_m \rightarrow f$ uniformly on $[a,b]$ and such that $I_n(f_m) \rightarrow I(f_m)$ as $n \rightarrow \infty$ for each fixed value of $m$. This suggests that we should write $$ I(f) - I_n(f) = \big[ I(f) - I(f_m) \big] + \big[I(f_m) - I_n(f_m) \big]+ \big[I_n(f_m) - I_n(f)\big].$$ Now try to assert control over each of the three terms in the square brackets. You will need to use your assumption about the weights $w_{j,n}$; check to see whether you are allowed to assume that the $w_{j,n}$ are non-negative.
For System dependent on normally distributed parameter, are deviations added or variations?
If $A$ and $B$ are independent normally distributed random variables with variances $\sigma_a^2$ and $\sigma_b^2$ respectively, and both have mean $0$, then $C=A+B$ is normally distributed with mean $0$ and variance $\sigma_a^2+\sigma_b^2$. So $\sigma_c=\sqrt{\sigma_a^2+\sigma_b^2}$, and not $\sigma_a + \sigma_b$. The reason is that for independent random variables the variance of the sum is the sum of the variances.
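A quick simulation illustrating this (a numpy sketch with example values $\sigma_a=2$, $\sigma_b=3$):

    import numpy as np

    rng = np.random.default_rng(1)
    sa, sb = 2.0, 3.0                   # example standard deviations
    A = rng.normal(0, sa, 1_000_000)
    B = rng.normal(0, sb, 1_000_000)
    print(np.std(A + B))                # about sqrt(4 + 9) = 3.606, not 5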
Positive definite matrix to be cancelled
Positive definiteness is rather a spectral property than a "component-wise" one. A randomly generated example shows that the statement is not true: $$ A=\begin{bmatrix}2 & 3 \\ 3 & 5\end{bmatrix}, x=\begin{bmatrix}-1\\3\end{bmatrix}, Ax=\begin{bmatrix}7\\12\end{bmatrix}. $$ This is true for M-matrices, which in the symmetric case happen to be positive definite. Not all positive definite matrices are M-matrices though.
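A quick numerical verification of the counterexample (numpy sketch):

    import numpy as np

    A = np.array([[2, 3], [3, 5]])
    x = np.array([-1, 3])
    print(np.all(np.linalg.eigvalsh(A) > 0))   # True: A is positive definite
    print(A @ x)                               # [ 7 12]: Ax is entrywise positive while x is not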
Question about calculation of higher-order infinitesimal
Let us take a look at the first claim. We see that $$ \frac{\frac{o(x^2)}{x}}{x} \; = \; \frac{o(x^2)}{x^2}$$ which tends to $0$ as $x \to x_0$ by definition of $o(x^2)$. Hence, $\frac{o(x^2)}{x}=o(x)$. Now, for the second claim, consider the functions $x\mapsto x^3$, $x\mapsto x^2$ and $x_0=0$. What does this simple example tell us?
Cylindrical shells word problem. Did I set this up correctly?
You are almost correct. Look specifically at your definition of the radius: $r = y-1$. It appears that you are blending how you define your radius with how you define the bounds of integration. To get an intuition for where you are going wrong, simply draw a rough sketch of the 3-D shape you are creating, superimposed on the Cartesian coordinate system. From there, draw the outermost "shell" you could imagine on that shape, with thickness $\Delta y$, and measure the radius to the outer edge of that shell as well as to the inside of the shell. I think you will quickly see where you have gone wrong! Additionally, here is some reading on the derivation of the shell method, which was helpful to me: http://mathonline.wikidot.com/calculating-volumes-cylindrical-shell-method
Free $\mathbb{Z}$-modules
A finite abelian group $G$ is not free as a $\mathbb{Z}$-module. Indeed, suppose $|G|=n$ and let $p$ be a prime not dividing $n$. Take an abelian group $H$ having an element $x$ of order $p$. Then there is no homomorphism $f\colon G\to H$ such that $x$ belongs to the image of $f$, because such image is annihilated by $n$, whereas $x$ is not. More generally, a finitely generated free module over the ring $R$ has the form $R^n$, for some $n$.
Finding the integral
HINT: You can solve the integral in this way: $$\int{\frac{x^3}{x^4-2x+1}}\,\,dx=$$ $$\frac{1}{4}\int{\frac{(4x^3-2)+2}{x^4-2x+1}}\,\,dx=$$ $$\frac{1}{4}\int{\frac{4x^3-2}{x^4-2x+1}}\,\,dx+\frac{1}{4}\int{\frac{2}{x^4-2x+1}}\,\,dx=$$ $$\frac{1}{4}\ln|x^4-2x+1|+\frac{1}{4}\int{\frac{2}{x^4-2x+1}}\,\,dx$$ and then you have to factor the denominator; since $x=1$ is a root, $x^4-2x+1=(x-1)(x^3+x^2+x-1)$.
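If you want to check the factorization by machine, a minimal sympy sketch:

    import sympy as sp

    x = sp.symbols('x')
    print(sp.factor(x**4 - 2*x + 1))   # (x - 1)*(x**3 + x**2 + x - 1)
    # The remaining rational piece is then handled by partial fractions.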
Method to find solution for $a^x \equiv b \pmod n$
Hint: $$(6-1)^n\equiv(-1)^n+(-1)^{n-1}\,6n\pmod{36}$$ by the binomial theorem, since every term containing $6^k$ with $k\ge 2$ vanishes mod $36$. Check separately the odd ($n=2m+1$) and even ($n=2m$) values of $n$.
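A quick numerical confirmation (Python sketch, taking $a=5$ as in the hint):

    # Confirm (6-1)^n == (-1)^n + (-1)^(n-1)*6n (mod 36) for many n.
    print(all(pow(5, n, 36) == ((-1)**n + (-1)**(n - 1) * 6 * n) % 36
              for n in range(1, 500)))   # True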
Can a game with a pure strategy Nash equilibrium also have mixed strategy equilibria?
I am not sure about the first question, but for the second question such a game definitely exists: consider a game with two players who can both choose left $(L,l)$ and right $(R,r)$, with coordination payoffs of, for instance, $$\begin{array}{c|cc} & l & r\\\hline L & (2,1) & (0,0)\\ R & (0,0) & (1,2)\end{array}$$ Clearly, there are two pure strategy Nash equilibria: $(L,l)$ and $(R,r)$. However, there is also a mixed strategy Nash equilibrium where Player 1 chooses $L$ with probability $\frac{2}{3}$ and Player 2 chooses $l$ with probability $\frac{1}{3}$.
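You can verify the indifference conditions numerically (a numpy sketch using the payoff matrix written above):

    import numpy as np

    A = np.array([[2, 0], [0, 1]])   # player 1's payoffs (rows: L, R)
    B = np.array([[1, 0], [0, 2]])   # player 2's payoffs (cols: l, r)
    p = np.array([2/3, 1/3])         # player 1 mixes over L, R
    q = np.array([1/3, 2/3])         # player 2 mixes over l, r

    print(A @ q)                     # [2/3, 2/3]: player 1 is indifferent
    print(p @ B)                     # [2/3, 2/3]: player 2 is indifferent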
What algorithm is used by computers to calculate logarithms?
It really depends on the CPU. For Intel IA-64, apparently they use a Taylor series combined with a table. More info can be found here: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.24.5177 and here: http://www.computer.org/csdl/proceedings/arith/1999/0116/00/01160004.pdf
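To give a flavor of the range-reduction-plus-series idea, here is a toy Python sketch (illustrative only, not the hardware algorithm; real implementations use precomputed tables and carefully tuned polynomials): reduce $x = m\cdot 2^e$ with $m\in[\tfrac12,1)$, so $\ln x = \ln m + e\ln 2$, then evaluate a short series for $\ln m$.

    import math

    def toy_log(x, terms=20):
        """ln(x) via range reduction and the atanh series; illustrative only."""
        m, e = math.frexp(x)               # x = m * 2**e with m in [0.5, 1)
        t = (m - 1) / (m + 1)              # ln(m) = 2 * atanh(t)
        s = sum(t**(2*k + 1) / (2*k + 1) for k in range(terms))
        return 2 * s + e * math.log(2)

    print(toy_log(123.456), math.log(123.456))   # the two values agree closely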