Is the unit sphere in $\mathbb{R}^n$ a zero-content set?
Assuming the $n$-sphere is embedded in $\mathbb R^{n+1}$ (since it cannot be embedded in $\mathbb R^n$), its $(n+1)$-dimensional Lebesgue measure is zero. A compact set of measure zero has zero content. I am saying yes, using total boundedness and the general relationship: compact = complete + totally bounded. Totally bounded means we can cover the space (the $n$-sphere here) by finitely many sets of any fixed diameter $\varepsilon>0$.
Showing 2 matrices span the same subspace?
You don't mean "$A$ and $B$ span the same subspace", you mean the columns of $A$ and $B$ span the same subspace, i.e. the column space of $A$ is the column space of $B$. Your condition should be that there is an invertible $n \times n$ matrix $C$ such that $B = AC$. To see this, note that each column of $AC$ is a linear combination of the columns of $A$ with coefficients given by the entries of the corresponding column of $C$. Moreover, if $B = AC$, $A = B C^{-1}$. (2) is really just the same (again with the stipulation that $C$ is invertible), because $\|B - AC\|_2^2 = 0$ means that every entry of $B-AC$ is $0$. (3) could run into a bit of a problem if done in the obvious way, because you don't know if the $C$ you get from linear regression will be invertible. Maybe better to first find a basis of the column space of each matrix.
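For the computational angle in (3), a rank-based test sidesteps the invertibility issue entirely; here is a minimal NumPy sketch (the helper name `same_column_space` and the tolerance are just for illustration): the column spaces coincide exactly when $\operatorname{rank}(A)=\operatorname{rank}(B)=\operatorname{rank}([A\,|\,B])$.

```python
import numpy as np

def same_column_space(A, B, tol=1e-10):
    # col(A) == col(B) iff rank(A) == rank(B) == rank([A | B])
    rA = np.linalg.matrix_rank(A, tol)
    rB = np.linalg.matrix_rank(B, tol)
    rAB = np.linalg.matrix_rank(np.hstack([A, B]), tol)
    return rA == rB == rAB

A = np.array([[1., 0.], [0., 1.], [0., 0.]])
C = np.array([[2., 1.], [1., 1.]])            # invertible
print(same_column_space(A, A @ C))            # True
print(same_column_space(A, np.ones((3, 2))))  # False
```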
Trouble isolating a variable in a simple equation.
If you are trying to solve this analytically for $t$, that's impossible in general. If $\alpha=n$ is an integer this is a polynomial equation of degree $n-1$, and if $\alpha$ is a fraction it can be reduced to one of some degree. But polynomial equations of degree higher than $4$ cannot be solved analytically, by Galois theory. An alternative approach is to look for positive zeros of the function $f(t):=(t + \tau)^\alpha - t^\alpha - \beta$. The derivative $f'(t)=\alpha\big((t + \tau)^{\alpha-1} - t^{\alpha-1}\big)$ is strictly negative for $t>0$ if $0<\alpha<1$, $\tau>0$, so the function is strictly monotone decreasing and there is at most one positive zero. The second derivative $f''(t)=\alpha(\alpha-1)\big((t + \tau)^{\alpha-2} - t^{\alpha-2}\big)$ is strictly positive under the same conditions, so the function is convex. For finding zeros of strictly monotone convex functions, Newton's method is very effective: it produces a sequence converging fast to the zero from any close enough initial guess. This sequence is monotone, so the error can be well estimated.
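To make the Newton iteration concrete, here is a minimal sketch (assuming $0<\alpha<1$, $\tau>0$, $\beta>0$; the helper name `solve_t` and the sample values are just for illustration):

```python
def solve_t(alpha, tau, beta, t0=1.0, tol=1e-12, max_iter=100):
    # Newton's method for f(t) = (t + tau)**alpha - t**alpha - beta = 0
    f  = lambda t: (t + tau)**alpha - t**alpha - beta
    df = lambda t: alpha * ((t + tau)**(alpha - 1) - t**(alpha - 1))
    t = t0
    for _ in range(max_iter):
        step = f(t) / df(t)
        t -= step
        if abs(step) < tol:
            break
    return t

# Example: alpha = 1/2, tau = 1, beta = 0.2, i.e. sqrt(t+1) - sqrt(t) = 0.2
t = solve_t(0.5, 1.0, 0.2)
print(t, (t + 1)**0.5 - t**0.5)   # t = 5.76, and the residual check gives 0.2
```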
Where is my mistake calculating $\int_{-\infty}^{\infty}\frac{x\sin(x)}{x^2+4}~dx$?
Doesn't it look strange to you that an integral expected to be positive (the main contribution comes from a neighbourhood of the origin) turns out to be negative? The issue is that this function does not fulfill the hypotheses of Jordan's lemma (the contribution given by the integral over the semicircular arc does not vanish as the radius of the arc goes to infinity). The escape route is the following: $$ \int_{\mathbb{R}}\frac{x\sin(x)}{x^2+4}\,dx = \text{Im}\int_{\mathbb{R}}\frac{x e^{ix}}{x^2+4}\,dx =\text{Im}\left[2\pi i\cdot\text{Res}\left(\frac{z e^{iz}}{z^2+4},z=2i\right)\right]\tag{1}$$ and it gives the expected result $\color{red}{\large\frac{\pi}{e^2}}$.
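As a numeric sanity check, one can feed the oscillatory integral to SciPy's weighted quadrature (a small sketch; the integrand is even, so the full integral is twice the half-line integral):

```python
import numpy as np
from scipy.integrate import quad

# integrand without the sin factor; weight='sin' handles sin(1*x) on [0, inf)
val, _ = quad(lambda x: x / (x**2 + 4), 0, np.inf, weight='sin', wvar=1.0)
print(2 * val)           # full integral = 2 * half-line integral
print(np.pi / np.e**2)   # both print ~0.425
```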
Exponential vs Erlang Distribution
The relationship between the exponential and Erlang distributions is exactly analogous to the relationship between a geometric and negative binomial random variable. A geometric random variable counts the number of Bernoulli trials needed to obtain the first success. A negative binomial random variable generalizes this idea to count the number of trials needed to obtain $r$ successes. Thus a geometric distribution is a special case of the negative binomial with $r = 1$. Another way to conceptualize a negative binomial random variable is that it is the sum of $r$ independent and identically distributed geometric random variables. Similarly, an exponential random variable counts the time until the first event of interest occurs. An Erlang random variable counts the time it takes to observe $r$ events. And so it is also the sum of $r$ IID exponential random variables. The following claim is incorrect: Erlang refers to the $x^{\rm th}$ occurrences within a set time interval while Exponential also refers to the intervals. The incorrect part of this claim is "within a set time interval." There is no such set time interval; we are only interested in the time it takes for the $x^{\rm th}$ occurrence, no matter how long that may be. As for your example questions: Not quite. If arrivals are Poisson distributed with rate $\lambda$ per unit time, then the time between two consecutive shipment arrivals is simply exponential with rate $\lambda$. Since the exponential distribution is a special case of the Erlang with $r = 1$, it is "Erlang" with shape parameter $r = 1$ and rate $\lambda$. What is genuinely (nontrivially) Erlang, however, is the total time until we observe two arrivals, which includes the time it takes to wait for the first arrival, plus the time between the first arrival and the second. In this case, the total waiting time is Erlang with shape $r = 2$ and rate $\lambda$. As already implied above, the interarrival time is exponential for a (homogeneous) Poisson process.
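Since the key fact is that an Erlang variable is a sum of $r$ i.i.d. exponentials, a quick simulation sketch (the rate $\lambda=1.5$ and the sample size are arbitrary) can confirm the match in mean and variance:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, r, size = 1.5, 2, 100_000

# total time until the 2nd arrival = sum of two iid Exponential(rate=lam) gaps
total = rng.exponential(scale=1/lam, size=(size, r)).sum(axis=1)
# Erlang(shape=r, rate=lam) is Gamma(shape=r, scale=1/lam)
erlang = rng.gamma(shape=r, scale=1/lam, size=size)

print(total.mean(), erlang.mean(), r/lam)     # all about 1.333
print(total.var(), erlang.var(), r/lam**2)    # all about 0.889
```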
Alternative proof that $\mathbb{Q}$ is dense in $\mathbb{R}$
I don't know Rudin's proof, but this one is correct and quite standard. The only thing it uses is that $\mathbb{R}$ is Archimedean, meaning that for every $a,b>0$ there exists $n\in \mathbb{N}$ such that $na>b$. You use that when you say that there is $n$ such that $n(y-x)>1$, so there must be an integer between $nx$ and $ny$. (You also have to know that there is an integer between two numbers $a,b$ such that $|a-b|>1$.)
prove that following set is closed in $\mathbb{R}^n$
Your proof is O.K. Between 4. and 5. you should insert "let $\varepsilon >0$ be given".
Convergence of Cauchy's sequence
Not always. A space in which every Cauchy sequence is a convergent sequence is called a complete space, but not every imaginable space is complete. The set of real numbers is complete, which means that a Cauchy sequence of real numbers will have a real limit. Other sets, like the interval $(0,1)$ or the set of rational numbers $\mathbb Q$, are not complete, and Cauchy sequences of numbers from them do not have to have a limit in these sets. However, when you have a non-complete set, you can always construct its completion, by adding new elements to the set in such a way that the result is a complete set. So you can say that every Cauchy sequence of elements of some space has a limit in the completion of this space, but not necessarily in the space itself. For example, the completion of the interval $(0,1)$ is the interval $[0,1]$, and the completion of $\mathbb Q$ is $\mathbb R$.
Does There Exist a LaTeX/Word Rewrite of Georg Cantor's Works?
All of Cantor's works? I doubt it. (Plug.) There is one Cantor paper (translated into English) here: Edgar, Gerald A. (ed.), Classics on fractals, Reading, MA: Addison-Wesley Publishing Company. x, 366 p. (1993). ZBL0795.28007. It is Cantor's paper “On the power of perfect sets of points”, where the "Cantor set" can be found. It does have explanations about outdated notations and terminologies.
What is $\int_{0}^1 x^{- m x} dx$ when $m$ is large?
This is a Laplace-method integral. You can write the integrand as $e^{-mg(x)}$ for $g(x)=x\log x.$ Then you Taylor expand $g(x)$ about its local minimum at $x=1/e$ as $$g(x)\approx -1/e +\frac{e}{2}(x-1/e)^2.$$ Finally, changing the limits of the integral to $\pm\infty$ only introduces exponentially small error terms. So you have a Gaussian integral to do, and you get $$ I(m) = \sqrt{\frac{2\pi }{em}} e^{m/e}\left(1+ O(1/\sqrt m)\right)$$
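A quick numerical comparison of the exact integral against the asymptotic formula (a sketch using SciPy quadrature; the chosen values of $m$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

asym = lambda m: np.sqrt(2*np.pi/(np.e*m)) * np.exp(m/np.e)
for m in (10, 50, 200):
    exact, _ = quad(lambda x: np.exp(-m*x*np.log(x)), 0, 1)
    print(m, exact / asym(m))   # the ratio tends to 1 as m grows
```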
Solve for $b$ in $a \mathbin{\oplus} b = b-a $
$a \oplus b = b - a$ $a \oplus b = a + b - 2 \cdot (a \wedge b)$ $a + b - 2 \cdot (a \wedge b) = b - a$ $2 \cdot a = 2 \cdot (a \wedge b)$ $a = a\wedge b$ (works forwards or backwards) So, $b$ can be anything as long as it has all of $a$'s bits set.
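A brute-force check of this conclusion over small values (a tiny sketch):

```python
# check: a ^ b == b - a exactly when a & b == a (b has all of a's bits set)
for a in range(64):
    for b in range(64):
        assert ((a ^ b) == (b - a)) == ((a & b) == a)
print("verified for 0 <= a, b < 64")
```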
How many ultrafilters there are in an infinite space?
To show that all finite intersections of sets in $\mathfrak{B}_{\mathscr{S}}$ have cardinality $|X|$ it suffices just to construct $|X|$-many elements in the intersection. (This is because, as you have noticed, there cannot be more than $|X|$-many elements in the intersection.) In the proof given, we have one particular element of this intersection: $$( F = \{ x_{ij} : i \neq j \} , \varphi = \{ F \cap S_1 , \ldots , F \cap S_k \} ).$$ Suppose that $( F , \psi ) \in \mathscr{F} \times \Phi$ is such that $\psi \supseteq \varphi$ is finite. Given any $i \leq k$ note that we clearly have that $S_i \cap F \in \psi$, and so $( F , \psi ) \in \mathfrak{b}_{S_i}$. Given $k < j \leq n$ note that $( F , \psi ) \in - \mathfrak{b}_{S_j}$ as long as $S_j \cap F \notin \psi$. Therefore as long as $\psi \supseteq \varphi$ is chosen so that $F \cap S_{k+1} , \ldots , F \cap S_{n} \notin \psi$, then $( F , \psi )$ will belong to the intersection. There are clearly $|X|$-many ways to choose appropriate $\psi$. Let $\mathfrak{B} = \{ \mathscr{A} \subseteq \mathscr{F} \times \Phi : | ( \mathscr{F} \times \Phi ) \setminus \mathscr{A} | < |X| \}$ denote the family of all subsets of $\mathscr{F} \times \Phi$ with complement of power $< |X|$. Note that not only does $\mathfrak{B}$ have the finite intersection property, it is actually closed under finite intersections. With this observation and the work above it becomes relatively easy to show that given $\mathscr{S} \subseteq \mathcal{P} ( X )$ the family $\mathfrak{B}_{\mathscr{S}} \cup \mathfrak{B}$ has the finite intersection property. To see this, suppose that $\mathfrak{b}_{S_1} , \ldots , \mathfrak{b}_{S_k} , - \mathfrak{b}_{S_{k+1}} , \ldots , - \mathfrak{b}_{S_n} , \mathscr{A}_1 , \ldots , \mathscr{A}_m$ are given. Then by the work above the set $\mathfrak{b} = \mathfrak{b}_{S_1} \cap \cdots \cap \mathfrak{b}_{S_k} \cap - \mathfrak{b}_{S_{k+1}} \cap \cdots \cap - \mathfrak{b}_{S_n}$ has power $|X|$, and by the observation above the complement of $\mathscr{A} = \mathscr{A}_1 \cap \cdots \cap \mathscr{A}_m$ has power $< |X|$. Thus $\mathfrak{b} \cap \mathscr{A} \neq \emptyset$. Therefore this family can be extended to an ultrafilter $\mathfrak{U}_{\mathscr{S}}$, and since $\mathfrak{B} \subseteq \mathfrak{U}_{\mathscr{S}}$, we know that $\mathfrak{U}_{\mathscr{S}}$ cannot include any sets of power $< |X|$.
If rank of $(m+1)\times n$ matrix is $m+1$, then some $(m+1)\times (m+1)$ submatrix has non-zero determinant.
Hint: If the rank is $m+1$, then the matrix must have $m+1$ linearly independent columns.
Finding Eigenvectors
Because $$A= \begin{bmatrix} 1 & 1 \\ 1 & 0 \\ \end{bmatrix}, $$ we have $$A-\lambda I= \begin{bmatrix} 1-\lambda & 1 \\ 1 & -\lambda \\ \end{bmatrix} $$ $$\det(A-\lambda I)=(1-\lambda)(-\lambda)-1\cdot1$$ $$=\lambda^2-\lambda-1,$$ whose roots are $$\lambda_1=\frac{1}{2}\sqrt{5}+\frac{1}{2}$$ $$\lambda_2=\frac{1}{2}-\frac{1}{2}\sqrt{5}$$
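Since the question asks for the eigenvectors too, here is a quick numeric check (a NumPy sketch, using the fact that for this matrix $v=(\lambda,1)$ is an eigenvector for each eigenvalue $\lambda$, read off from the second row of $(A-\lambda I)v=0$):

```python
import numpy as np

A = np.array([[1., 1.],
              [1., 0.]])
evals, evecs = np.linalg.eig(A)
print(evals)   # [ 1.618..., -0.618... ] = (1 ± sqrt(5))/2
print(evecs)   # columns are the corresponding (normalized) eigenvectors
for lam in evals:
    v = np.array([lam, 1.0])             # second row gives v1 = lam * v2
    print(np.allclose(A @ v, lam * v))   # True, True
```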
Infinite Sum Axioms in Tohoku
So let us suppose we have a family of monomorphisms $A_i \to B_i$, where $i$ varies over an indexing set $I$. Let $A = \bigoplus_i A_i$ and $B = \bigoplus_i B_i$; we want to show that $A \to B$ is also a monomorphism. Let $K$ be the kernel, so that we have an exact sequence $$0 \longrightarrow K \longrightarrow A \longrightarrow B$$ Let $\mathcal{J}$ be the system of all finite subsets of $I$: this is a filtered poset, and if we define $A_j = \bigoplus_{i \in j} A_i $ and $B_j = \bigoplus_{i \in j} B_i$ for each finite subset $j$ of $I$, we get a filtered system. In any abelian category, given a diagram $$\begin{alignedat}{3} 0 \longrightarrow \mathord{} & K_j & \mathord{} \longrightarrow \mathord{} & A_j & \mathord{} \longrightarrow \mathord{} & B_j \\ & \downarrow && \downarrow && \downarrow \\ 0 \longrightarrow \mathord{} & K & \mathord{} \longrightarrow \mathord{} & A & \mathord{} \longrightarrow \mathord{} & B \\ \end{alignedat}$$ if the rightmost vertical arrow is a monomorphism and both rows are exact, then the left square is a pullback square; since $A_j \to A$ is also a monomorphism, this amounts to saying that $K_j = A_j \cap K$. Since $j$ is a finite set, $A_j \to B_j$ is automatically a monomorphism, so $K_j = 0$. Taking the filtered colimit over $\mathcal{J}$, we obtain $$\sum_j K_j = \left( \sum_j A_j \right) \cap K = A \cap K = K$$ so $K = 0$. Thus, $A \to B$ is a monomorphism. In fact, AB5 is strictly stronger than AB4. Take $\mathcal{A} = \textbf{Ab}^\textrm{op}$. The opposite of an abelian category is an abelian category, and it is not hard to show that $\mathcal{A}$ satisfies AB3 and AB4; but $\mathcal{A}$ does not satisfy AB5. This is the same example used by Grothendieck in his paper.
Calculus Problem general polynomial limit to infinity
Hint: Note that for positive $x$ at which $[P(x)]^{1/n}$ is defined, we have $$\sqrt[n]{P(x)}=x\sqrt[n]{1+\frac{b_1}{x}+\cdots+\frac{b_n}{x^n}}.$$ Let $t=\frac{1}{x}$. We want the limit as $t$ approaches $0$ from the right of $$\frac{\sqrt[n]{1+b_1t +\cdots +b_n t^n}-1}{t}.\tag{1}$$ We can now use L'Hospital's Rule. Or just the definition of the derivative. Or else we can make estimates of the $n$-th root. Or else we can rationalize the numerator. Let $u=\sqrt[n]{1+b_1t+\cdots +b_nt^n}$. Then $u-1=\frac{u^n-1}{u^{n-1}+u^{n-2}+\cdots+1}$.
How to get "N" from $k=\log_2(N)$
Yes, that's correct. More generally, if $k = \log_b(N)$ then $N = b^{\log_b(N)} = b^k$.
Axiom of Choice as a tool for proving measurability
There's definitely something along these lines we can do, but it's surprisingly difficult to do exactly: we need to talk about different "universes" in which our mathematics takes place, and how we can transfer facts between them appropriately. Unfortunately, the following answer is rather technical - that's sort of the point, that technical issues are essential here - but hopefully I've managed to pare things down to the point that it's actually interesting: The key idea here is that of an inner model (of ZF + whatever). The precise definition of the term is rather technical ("a transitive proper class satisfying the ZF axioms (+ whatever)"), but the intuition is that it's just a more restricted universe for doing math, which - by virtue of being more limited - might have nicer properties. (An inner model could also be much worse of course, but we're only going to be interested in the nice ones for now.) Under reasonable set-theoretic assumptions (so-called "large cardinal hypotheses"), there are lots of inner models $M$ with the following properties: $M$ satisfies ZF + DC (= a weak choice principle which prevents really stupid stuff from happening) + "Every set of reals is measurable." Every real number is in $M$. If $X\subseteq\mathbb{R}$ and $X\in M$, then $X$ is measurable. (That third condition actually comes for free from the second, but that takes an argument.) We will call such models "nice." The initial idea is that when we write a definition $\varphi$ of a set $S$ in the "real world," we can "copy" that definition in a nice inner model $M$ containing those parameters to get "$M$'s version of $\varphi$" - which I'll call "$\varphi^M$" - and by the niceness of $M$ conclude measurability of the original set. Unfortunately, we've got problems. Our first issue is that our original definition $\varphi$ might involve some parameters - e.g. $\{x: f(x)=g(x)\}$ involves $f$ and $g$. Now we can't "run that definition inside $M$" unless $M$ also contains $f$ and $g$. So we can't use any old inner model. Much worse, however, is the following: when we switch from the "real world" to an inner model, definitions go wild. For example, consider the definitions "$\{x: x\not=x\}$" and "$\{x:$ $x=17$ and the axiom of choice fails$\}$." In reality, these each correspond to the same set, namely the empty set. But now suppose $M$ is a nice inner model. $\{x: x\not=x\}^M$ still defines $\emptyset$, but $\{x:$ $x=17$ and the axiom of choice fails$\}^M$ defines $\{17\}$ (since $M$ thinks the axiom of choice fails). So here's what we need to do to make everything work. Suppose we have a definition $\varphi$ (using "parameters" $a_1,...,a_n$) of a set of reals $S$, and we want to show that $S$ is measurable. First, we show that $\varphi$ is absolute: if $M$ is any nice inner model and $r$ is a real number, then $M$ thinks $r$ is in the set defined by $\varphi$ iff $r$ really is in the set defined by $\varphi$. (This isn't a given! Besides the toy example above, there are more serious examples: e.g. as Andreas mentions, it's possible that there is a fully-definable non-measurable set; so even if choice isn't involved explicitly in $\varphi$, "choiciness" could still play a hidden role.) Next, we show that there is a nice inner model $M$ with $a_1,...,a_n\in M$. (This also isn't a given! For a trivial example, suppose one of the parameters is itself a non-measurable set.)
The second step lets me "run $\varphi$ inside $M$;" the resulting set of reals ("$M$'s version of $S$") is measurable, and by the first step that set is the actual $S$ we care about. We can also use this approach to prove "template" theorems, applying to many definitions at once. However, all of this takes serious work, and relies on set-theoretic hypotheses beyond ZFC. The end result is: It takes a lot of work to make the argument you have in mind precise, let alone actually work, and ultimately requires more axioms than standard set theory actually uses. Does that mean that "definition-examination" is never helpful to prove measurability? Not at all! But we have to be more fine-grained in our examination, and look at the syntactic complexity of the definition: syntactic complexity is connected to topological complexity, and so we can sometimes argue that a set is measurable by saying something like: "The definition of this set has such-and-such syntactic form, so it's (say) Borel, and all Borel sets are measurable." This is part of descriptive set theory.
Two monotone simplexes in the product space are "disjoint"
It's enough to show this in the special case where $$x_i = (\underbrace{1,1,\dots,1}_{i}, \underbrace{0,0,\dots,0}_{p-i}) \qquad \text{and} \qquad y_j = (\underbrace{1,1,\dots,1}_{j}, \underbrace{0,0,\dots,0}_{q-j}).$$ In this case, for any monotone simplex $\sigma$, $\sigma_k$ is a point with $k$ coordinates that are $1$ and $p+q-k$ coordinates that are $0$. From $k$ to $k+1$, exactly one coordinate changes from $0$ to $1$. In other words, each monotone simplex $\sigma$ is a coordinate permutation of the simplex which has \begin{align} \sigma_0 &= (0,0,\dots,0) \\ \sigma_1 &= (1,0,\dots,0) \\ \vdots\ &= \ \vdots \\ \sigma_{p+q} &= (1,1,\dots,1). \end{align} The interior of this simplex consists of all points $(u_1, u_2, \dots, u_{p+q})$ with $$1 > u_1 > u_2 > \dots > u_{p+q} > 0.$$ After a permutation $\pi$ of the coordinates, we get a simplex $\sigma_\pi$ whose interior consists of all points $(u_1, u_2, \dots, u_{p+q})$ with $$1 > u_{\pi(1)} > u_{\pi(2)} > \dots > u_{\pi(p+q)} > 0.$$ All monotone simplices can be written in this way, though we also get some non-monotone simplices. (We get a monotone simplex if the relative ordering of the coordinates coming from $x$, and the relative ordering of the coordinates coming from $y$, are maintained. But this doesn't really matter.) Given two different monotone simplices $\sigma_\pi$ and $\sigma_{\pi'}$, there will be some $i$ and $j$ such that $\pi^{-1}(i) < \pi^{-1}(j)$ but $\pi'^{-1}(i) > \pi'^{-1}(j)$. In that case, all points in the interior of $\sigma_\pi$ satisfy $u_i > u_j$, but all points in the interior of $\sigma_{\pi'}$ satisfy $u_i < u_j$, so they are disjoint. To reduce to this special case, note that any simplex in $\mathbb R^p$ is an affine transformation of the $x$ simplex above, and any simplex in $\mathbb R^q$ is an affine transformation of the $y$ simplex above. The product of these two transformations is an affine transformation of $\mathbb R^{p+q}$. So for arbitrary $x$, $y$ in general position, we can turn their monotone simplices into the monotone simplices above by an affine transformation. But affine transformations are bijective and preserve linear inequalities. The interiors of the images of two monotone simplices will be disjoint, so the interiors of any two monotone simplices will themselves be disjoint.
Given an exact differential; $df=yz\,dx+xz\,dy+(xy+a)\,dz$: Why must we integrate each term independently to find the parent function $f\,$?
Observe that $P,Q,R$ are not mere constants but functions themselves, i.e. $P=P(y,z)$, $Q=Q(x,z)$ and $R=R(x,y)$. This is because $x,y,z$ are not at all independent but are functionally related. So your method of integrating everything together does not work. Note that when you integrate by your method, you are treating $2$ variables as independent of the integrating variable. And if you use the 3rd equation, it most aptly describes the function from among the other options; this can be realized if you write $P,Q,R$ as $P=P(y,z)$, $Q=Q(x,z)$ and $R=R(x,y)$.
transience and recurrence of a random walk
$$p_{00}^{2n}={2n\choose n}p^n(1-p)^n= {1\over 4^n}{2n\choose n}\times [4p(1-p)]^n\leq [4p(1-p)]^n.$$ The sum of the right hand side is a convergent geometric series if $p\neq 1/2$. Added: I think I understand your problem better now. I hope this is what you want; let me know if anything is unclear. You want to know how $X_n\to\infty$ implies $\sum_n p^n_{00}<\infty$. To make a direct connection between the sum and the random walk, let $N=\sum_{n=0}^\infty 1_{(X_n=0)}$ denote the total number of visits to state $0$. Then $\sum_n p^n_{00}=E(N)\leq\infty$. Now define the return time to zero as $T:=\inf(n>0: X_n=0)$. By the strong Markov property, the random variable $N$ is geometric with probability of success $P(T=\infty)$. If $P(T=\infty)=0$, then $P(N=\infty)=1$ which contradicts $X_n\to\infty$. Therefore, we have $P(T=\infty)>0$ and $E(N)={1\over P(T=\infty)}<\infty$, which shows that the random walk is transient. Further calculations would give the explicit formula $E(N)=1/|1-2p|$.
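A simulation sketch of the transient case (assuming steps are $+1$ with probability $p$ and $-1$ otherwise; the truncation at $2000$ steps is harmless because of the drift):

```python
import numpy as np

rng = np.random.default_rng(1)
p, walks, steps = 0.7, 20_000, 2_000

jumps = np.where(rng.random((walks, steps)) < p, 1, -1)
paths = np.cumsum(jumps, axis=1)
visits = 1 + (paths == 0).sum(axis=1)   # count time 0 as a visit

print(visits.mean())         # about 2.5
print(1 / abs(1 - 2 * p))    # E(N) = 2.5 for p = 0.7
```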
How to calculate this determinant (Finding characteristic polynomial).
Hint: There is a simple method; you just have to compute: \begin{align} (x-2)\begin{vmatrix} x-3 & 0 \\ -1 & x+1 \end{vmatrix} + (-1)(-3)\begin{vmatrix} 1 & 0 \\ -1 & x+1 \end{vmatrix} + (-2)\begin{vmatrix} 1 & x-3 \\ -1 & -1 \end{vmatrix} \end{align} Can you continue from here?
Finding stationary points of $f(x,y) = x^2+y^2+\beta xy+x+2y$
You got: $$\begin{bmatrix}2 & \beta \\ \beta & 2 \end{bmatrix}\begin{bmatrix}x\\y \end{bmatrix} = \begin{bmatrix}-1\\ -2 \end{bmatrix}$$ Let's perform one step of row reduction: $$\begin{bmatrix}1 & \frac{1}{2}\beta \\ 0 & 2-\frac{1}{2}\beta^2 \end{bmatrix}\begin{bmatrix}x\\y \end{bmatrix} = \begin{bmatrix}-\frac{1}{2}\\ \frac{1}{2}\beta-2 \end{bmatrix}$$ The system has no solution if $2-\frac{1}{2}\beta^2=0$ but $\frac{1}{2}\beta-2 \neq 0$, so when $\beta=2$ or $\beta=-2$. The system has infinitely many solutions if $2-\frac{1}{2}\beta^2= 0$ and $\frac{1}{2}\beta-2 = 0$, which cannot happen. When $\beta\neq 2$ and $\beta\neq -2$ we do one more step of row reduction: $$\begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix}x\\y \end{bmatrix} = \begin{bmatrix}-\frac{1}{2}-\frac{1}{2}\frac{\beta^2-4\beta}{4-\beta^2}\\ \frac{\beta-4}{4-\beta^2} \end{bmatrix}$$ You can now readily read the solution.
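A symbolic double-check of this row reduction (a SymPy sketch; `solve` reproduces the generic solution, valid for $\beta\neq\pm2$):

```python
from sympy import symbols, solve, simplify

x, y, beta = symbols('x y beta')
# gradient equations of f: 2x + beta*y + 1 = 0 and beta*x + 2y + 2 = 0
sol = solve([2*x + beta*y + 1, beta*x + 2*y + 2], [x, y], dict=True)[0]
print(simplify(sol[y] - (beta - 4)/(4 - beta**2)))   # 0: matches the text
print(solve(2 - beta**2/2, beta))                    # [-2, 2]: singular cases
```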
How prove $ax(a+x)+by(b+y)+cz(c+z)\ge 3(abc+xyz)$
It's enough to prove that $x^2a+y^2b+z^2c+xyz\geq4abc$. Let $x^2a+y^2b+z^2c+xyz<4abc$, $x=kp$, $y=kq$ and $z=kr$, where $k>0$ and $p^2a+q^2b+r^2c+pqr=4abc$. Hence, $p^2a+q^2b+r^2c+pqr=4abc>x^2a+y^2b+z^2c+xyz=k^2(p^2a+q^2b+r^2c+kpqr)$, which says that $0<k<1$ and $a+b+c=x+y+z=k(p+q+r)<p+q+r$, which is a contradiction because we'll prove now that $a+b+c\geq p+q+r$, where $a$, $b$, $c$, $p$, $q$ and $r$ are positives such that $p^2a+q^2b+r^2c+pqr=4abc$. Indeed, we'll rewrite the last condition in the following form. $$\frac{p^2}{4bc}+\frac{q^2}{4ac}+\frac{r^2}{4ab}+\frac{pqr}{4abc}=1.$$ Let $\cos\alpha=\frac{p}{2\sqrt{bc}}$, $\cos\beta=\frac{q}{2\sqrt{ac}}$ and $\cos\gamma=\frac{r}{2\sqrt{ab}}$. Hence, $$\cos^2\alpha+\cos^2\beta+\cos^2\gamma+2\cos\alpha\cos\beta\cos\gamma=1,$$ which says that $\alpha+\beta+\gamma=180^{\circ}$ and we need to prove that $$a+b+c\geq2\sqrt{bc}\cos\alpha+2\sqrt{ac}\cos\beta+2\sqrt{ab}\cos\gamma.$$ Let $\Delta ABC$ be a triangle such that $\measuredangle A=\alpha$, $\measuredangle B=\beta$ and $\measuredangle C=\gamma$, $\vec{u}\uparrow\uparrow \vec{CB}$, $|\vec{u}|=\sqrt{a}$, $\vec{v}\uparrow\uparrow \vec{BA}$, $|\vec{v}|=\sqrt{b}$, $\vec{w}\uparrow\uparrow \vec{AC}$, $|\vec{w}|=\sqrt{c}$ and since $$(\vec{u}+\vec{v}+\vec{w})^2\geq0,$$ we are done!
Showing a function is not bounded on any open interval
You have a solid start at an approach to showing that $f$ is a nowhere continuous function. https://en.wikipedia.org/wiki/Nowhere_continuous_function However, there are a few subtleties which you might want to consider revising. In order to both use your lemma and directly address the main problem, you'll need to begin and end the proof from the perspective of establishing that $f$ is a nowhere continuous function. To achieve this, you could simply nest your current proof by contradiction for $f$ being unbounded on every open interval in $A$ into a proof by contradiction which demonstrates that $f$ is discontinuous everywhere in $A$. The latter proof will be short and sweet, since all of the dirty work is taken care of by the proof you currently have. In fact, $f$'s nowhere continuity follows very nicely and clearly from your current proof and lemma. Claiming that $n(b-a) > 1$ is faulty, because this is true iff $b-a > \frac{1}{n}$, which is not necessarily true given that $a$ and $b$ are both arbitrarily selected. As Jair Taylor mentioned in one of the comments, an easy way to handle this issue is to select $n \in \mathbb{N}$ such that $b-a > \frac{1}{n}$, which is fully justified by using the Archimedean principle: For all positive real numbers $x \in \mathbb{R}_{>0}$, there exists $n \in \mathbb{N}$ such that $\frac{1}{n} < x$. To make the proof regarding $f$'s unboundedness on $(a,b)$ fully complete, you'll have to conclude with a fraction in reduced form. As currently laid out, $\frac{m}{n}$ is not necessarily in reduced form, because $gcd(m,n)=1$ is not guaranteed by your current exposition. An easy way around this is to take care of the situation in which $gcd(m,n) = d > 1$ by including something like "If $gcd(m,n) = d > 1$, then let $m = id, \ n = jd$ with $i,j \in \mathbb{N}$ and write $\frac{m}{n} = \frac{i}{j}$, so that you have an equivalent fraction in reduced form."
Is $f(n)=\begin{cases} \frac{n}{2}&\text{if}~n~\text{is even}\\ \frac{-n-1}{2}&\text{if}~n~\text{is odd}\end{cases}$ a bijection?
Inverse function: if $n$ is negative then $f^{-1}(n) = -2n-1$; if $n$ is positive, $f^{-1}(n)=2n$. It is easily verified that this maps all integers except $0$ to all natural numbers. Note, however, that the function as phrased in the question is not a bijection because it is not surjective: there is no natural number that gets sent to $0$.
Multiple choices for a single case in the recursive formula of a Dynamic Programming algorithm
I think it's cleanest if you think about formulating your DP something like this: $$z_i = \min_{a\in A(t_{i-1})} \{f_i(a)\},$$ where $A(t_{i-1})$ is the set of available actions you can take, given the value of $t_{i-1}$, that is: $$A(t_{i-1}) = \begin{cases} \{C_{i-1}, C_{i-1}-2\}, & \text{if } t_{i-1}=int, \\ \{C_{i-1}\}, & \text{if } t_{i-1}=app,\end{cases}$$ or something like that. $f_i(a)$ is the cost/reward function value if you take action $a$ for job $i$. You haven't said what your objective function is, and I don't know what $p$ or $r(j)$ are, but presumably those can be incorporated into $f_i(a)$. I'm sure I'm getting some of the notation and concepts wrong, but in general I suggest that you distinguish between the state ($t_{i-1}$), the actions ($C_{i-1}-2$, etc.), and the costs/rewards ($f_i(a)$).
problem with right triangle
One way to do it: 1) Look at the big triangle $ABC$. You know two lengths and know that it is a right triangle, so you can get the angles, using the formulas for cosine and sine in a right triangle. 2) Look at the triangle $ACE$. You know all the angles and you know the length $AC$, so with the law of sines $a/\sin(\alpha)=b/\sin(\beta)=c/\sin(\gamma)$ you get all the lengths. 3) Look at the triangle $CDE$. You know all the angles and one length, so you can find all the lengths with the formula above. This gives you $ED$.
Gaussians going towards delta "functions"
Let $X_k$ be i.i.d. normal random variables with zero mean and variance $\sigma^2 = \frac{1}{n}$. Define $Y_n = \sum_{k=1}^n X_k^2 = \frac{1}{n} \sum_{k=1}^n Z_k^2$, where $Z_k$ are i.i.d. standard normal variables. (Already from this form, $Y_n \to \mathbb{E}(Y_n)$ with probability 1 by the law of large numbers). Clearly: $$ \mathbb{E}(Y_n) = \frac{1}{n} \sum_{k=1}^n \mathbb{E}(Z_k^2) = \frac{1}{n} \sum_{k=1}^n 1 = 1 $$ By independence: $$ \mathbb{Var}(Y_n) = \frac{1}{n^2} \sum_{k=1}^n \mathbb{Var}(Z_k^2) = \frac{1}{n^2} \sum_{k=1}^n \mathbb{E}((Z_k^2-1)^2) = \frac{1}{n^2} \sum_{k=1}^n \left(\underbrace{\mathbb{E}(Z_k^4)}_{=3} - 2 \underbrace{\mathbb{E}(Z_k^2)}_{=1} + 1\right) = \frac{2}{n} $$ Now, using Chebyshev's inequality: $$ \mathbb{P}\left( |Y_n - 1| > \epsilon \right) < \frac{\mathbb{Var}(Y_n)}{\epsilon^2} $$ Hence for arbitrary $\epsilon, \delta > 0$ there exists $m \in \mathbb{N}$ such that for all $n > m$, $\mathbb{P}( | Y_n -1| > \epsilon) < \delta$, i.e. $Y_n$ converges in probability to 1.
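A short simulation sketch of this concentration (the sample counts are arbitrary); the empirical variance should track $2/n$:

```python
import numpy as np

rng = np.random.default_rng(2)
for n in (10, 100, 1000, 10000):
    # Y_n = (1/n) * sum of n squared standard normals
    Y = (rng.normal(size=(5000, n))**2).mean(axis=1)
    print(n, Y.mean(), Y.var(), 2/n)   # mean -> 1, variance tracks 2/n
```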
How do I find the $\lim_{x\to 0}\frac{7x^3-4x^2}{\sin(3x^2)}$ without using L’Hopital’s?
So we have $$\lim_{x\to 0} \frac{7x^3 - 4x^2}{\sin(3x^2)}.$$ First, factor the numerator: $7x^3 - 4x^2 = x^2(7x - 4)$. Therefore, we have $$\lim_{x\to 0} \frac{x^2(7x - 4)}{\sin(3x^2)}.$$ After this, we multiply the numerator and the denominator by $3x^2$. Note $\frac{3x^2}{3x^2}=1$, so we aren't changing anything and the expression is still equivalent: $$\lim_{x\to 0} \frac{x^2(7x - 4)}{3x^2}\cdot\frac{3x^2}{\sin (3x^2)}.$$ We know the standard limits $$\lim_{u\to 0} \frac{\sin u}{u}=1 \text{ and } \lim_{u\to 0}\frac{u}{\sin u}=1,$$ and with $u = 3x^2$ (which tends to $0$ as $x \to 0$), the second factor $\frac{3x^2}{\sin(3x^2)}$ tends to $1$. What remains is $$\lim_{x\to 0} \frac{x^2(7x - 4)}{3x^2} = \lim_{x\to 0}\frac{7x - 4}{3} = -\frac{4}{3}.$$ I hope this makes sense though!
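A quick numeric sanity check (a two-line sketch; the sample points are arbitrary):

```python
import numpy as np

f = lambda x: (7*x**3 - 4*x**2) / np.sin(3*x**2)
for x in (0.1, 0.01, 0.001):
    print(x, f(x))   # approaches -4/3 = -1.3333...
```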
Why the clique numbers of almost all graphs are concentrated at two values?
I believe that Paul Hudford's answer thought that the question is about the chromatic number, whereas the question is about the clique number of the random graph. Another inaccuracy with Paul's answer (although it has nothing to do with the question asked) is that in the paper being referred to, Heckel proves anti-concentration for intervals of length $\sim n^{1/4}$ (not $\sim n^{1/2}$), but recently in joint work with Riordan, Heckel announced a proof of an optimal bound of the form $\sim n^{1/2}$: https://www.math.princeton.edu/events/non-concentration-chromatic-number-random-graph-2019-11-06t200000 To answer the original question, it is important to realize that when a statement is made about almost all graphs, the object in question is $G(n,1/2)$, as opposed to $G(n,p)$ where $p$ is a parameter that shrinks with $n$. When $p$ is some other value that is constant, usually the situation is similar. Something that is true is the following: $G(n,1/2)$ with high probability has its clique number equal to one of two special values, and these values are near $2\log(n)$. The intuition is as follows: Let $X_k$ be the random variable counting the number of $k$-cliques in $G(n,1/2)$. If $\mathbb{E}(X_k)\to 0$, by Markov's inequality, $G$ almost surely does not contain a $k$-clique. If $\mathbb{E}(X_k)\to \infty$ on the other hand, calculating the variance will reveal that $G$ almost surely contains a $k$-clique. What needs to be true for two-point-concentration to hold is that $\mathbb{E}[X_k]$ has a phase shift that occurs around $k\sim2\log(n)$. You can calculate that the ratio $\frac{\mathbb{E}[X_{k+1}]}{\mathbb{E}[X_k]}=o(1)$ when $k$ is close to $2\log(n)$. What this means is that $\mathbb{E}[X_k]$ cannot stay some constant value for more than a couple of values of $k$. Say for example that $\mathbb{E}[X_k]\sim 10$ for some special value of $k$ (near $2\log(n)$). Then, the result about the ratio I'm referring to shows that $\mathbb{E}[X_{k+1}]\to 0$ and $\mathbb{E}[X_{k-1}]\to \infty$, which already gives a three-point-concentration result. For more details, I suggest Zhao's notes available here (pages 34-35): http://yufeizhao.com/pm/pmnotes.pdf
If $a \in A$ and $b \in B$ then $2a \in B$ and $2b \in A$ and $(a+b)^{2014}\in C$
Hint: If $m \in A$, then $m=3k+1$ for some $k \in \mathbb{Z}$. So $2m=6k+2=3(2k)+2$. Since this is of the form $3s+2$, we get $2m \in B$.
Differentiability of $z \, Log(1+\overline z)$
For $z=0$, using the limit definition directly, you see the derivative is $0$. For non-zero values, we just need to investigate $f(z) = \log (1+\bar{z})$, and it is differentiable nowhere: note that $e^{f(z)}=1+\bar{z}$ is not differentiable. So, the function is differentiable only at the origin and nowhere analytic.
Let $A$ be an abelian group such that $End_{Ab}(A)$ is a field of characteristic $0$; prove $A \cong \Bbb{Q}$
Hint: If the $\mathbb{Q}$-dimension is $>1$, take two elements $e_1,e_2$ of $A$ which are $\mathbb{Q}$-linearly independent, consider a nontrivial nilpotent linear map $B$ of $Vect_{\mathbb{Q}}(e_1,e_2)$ and show that it induces a nilpotent element of $End_{Ab}(A)$.
Is there a name or theory for this sieve characteristic?
It's called a wheel sieve, as you found out in the comments. It actually can be given a derivation, though quite honestly there is not much point to it. Here's how it would go: $\gcd(a,b)=1\implies\gcd(a-b,b)=1$; that is, if $b$ is a remainder class mod $a$ that can contain primes, so is $a-b$. This is basically using the distributive property and factoring out: if we have two values that aren't relatively prime, then we can factor out a common factor greater than $1$ (and primes don't have factors greater than $1$ other than themselves). So it turns out you just pick all the relatively prime remainder classes. The number of residues relatively prime to a number is always even, and the sequence will always be symmetric around the halfway point of the repeating part. You really want a formula for Euler's phi function: $$\varphi(n)=n\prod_{d\mid n,d\in \Bbb{P}}(1-{1 \over d})$$ This tells you how long the repeating part goes on for. As to the values of the differences, they'll typically be divisible by the primes you eliminated.
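As a concrete illustration (a small sketch using the $2\cdot3\cdot5=30$ wheel as an example), one can list the coprime residue classes and see both the symmetry and the gap pattern:

```python
from math import gcd, prod

primes = [2, 3, 5]
m = prod(primes)                                   # wheel size 30
residues = [r for r in range(1, m) if gcd(r, m) == 1]
print(m, len(residues))   # 30 8   (phi(30) = 30*(1/2)*(2/3)*(4/5) = 8)
print(residues)           # [1, 7, 11, 13, 17, 19, 23, 29], symmetric about 15
print([b - a for a, b in zip(residues, residues[1:])])   # gaps [6, 4, 2, 4, 2, 4, 6]
```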
Proof of the pumping lemma for Context-Free Languages
There does appear to be an error in the argument. It can be fixed by taking $p$ initially to be $b^{|V|+1}$: we may assume that $b>1$ (as otherwise the language is finite and therefore regular), so $b^{|V|+1}\ge 2b^{|V|}\ge b^{|V|}+1$, and we can still argue that a syntactic tree for $s$ must have height at least $|V|+1$.
Stolz Cesaro Application
Let's use the differentiable version of the Stolz–Cesàro lemma: the L'Hospital rule. Let $f(x) = \frac{x-1}{\ln x}$. Then $$\lim_{n\to \infty} \frac{b_n - 1}{\ln b_n} = \lim_{n\to \infty} f(b_n) = \lim_{x\to 1} f(x)$$ if the limit exists (using $b_n \to 1$). Since $$\lim_{x\to 1} \frac{(x-1)'}{(\ln x)'} = \lim_{x\to 1} \frac{1}{1/x} =1,$$ we have, by L'Hospital's rule, $$\lim_{x\to 1} f(x) = 1\Rightarrow \lim_{n\to \infty} \frac{b_n -1}{\ln b_n} = 1.$$
Are these proofs of convergence sufficient
The series converges. Indeed, $$\dfrac{x+1}{x^3+4}\sim_{\infty}\frac {1}{x^2},$$ and $\sum \frac{1}{n^2}$ converges.
Why is $ 2n^2\log n+3n^2 \notin \Omega(n^3)$?
When compared to polynomials, logarithms are vastly smaller. For instance, $\log^{100} x < x^{1/100}$ for sufficiently large $x$. So for loose asymptotics, it's not unreasonable to "ignore logs." So morally, you are asking if $n^2 > c n^3$ for some positive constant $c$ and sufficiently large $n$, and this is clearly not the case. Another common replacement is to replace $\log x$ by $x^{\epsilon}$ for some really small $\epsilon$. If you want to be concrete, perhaps $\epsilon = 0.01$. Then you're asking about $n^{2.01} > c n^3$, which is also clearly not the case for large $n$.
Kernel of the map $D^2+aD+b : k[[X]] \to k[[X]]$ , where $D :k[[X]] \to k[[X]]$ is the usual derivative map
You are trying to solve the differential equation $$\frac{d^2 y}{d x^2} + a \frac{d y}{d x} + b y = 0,$$ which, on the level of power series, is $$n (n-1) a_n + a (n-1) a_{n-1 }+ b a_{n-2} = 0,$$ for $n \geq 2.$ If the roots of $P$ are real, the kernel will consist of linear combinations of two exponentials; otherwise, of sines and cosines.
Statements equivalent to $\sum\limits_{n=1}^{\infty} b_n$ diverging.
Your intuition does not apply to series whose partial sums are bounded, but nevertheless do not converge to a limit, such as $\sum (-1)^n$. If you restrict yourself to series with strictly positive terms, then this is true since, as you note, a divergent positive series has the property that its partial sums are arbitrarily large.
When is it appropriate to neglect the arguments of the function?
The bar in $$f(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2} } e^{ -\frac{(x-\mu)^2}{2\sigma^2} }$$ indicates that if we have a normal distribution with a "given" mean and standard deviation, then the density function is $$\frac{1}{\sqrt{2\pi\sigma^2} } e^{ -\frac{(x-\mu)^2}{2\sigma^2} }$$ Most authors do not use this notation; they write $N(\mu, \sigma)$ to emphasize that the distribution is normal with the given mean and standard deviation. You may as well write $$f(x) = \frac{1}{\sqrt{2\pi\sigma^2} } e^{ -\frac{(x-\mu)^2}{2\sigma^2} }$$ without any problems.
Universal object of forgetful functor Ring and Set
Yep, that's right. $p(x)\mapsto p(r)$. Such a homomorphism is uniquely determined by $r$.
How to compute the expected minimum Hamming distance with 3 strings
I realized that my original answer was unnecessarily complicated. I’m preserving it below, but here’s a more efficient approach. As below, without loss of generality fix the first string to be all zeros. Now consider the counts $a_{ij}$ of positions in which the second and third string have the values $i$ and $j$, respectively. In contrast to the variables in the original solution, these four variables are all on an equal footing. They are all close to $\frac n4$. The Hamming distances are \begin{eqnarray} h_{12}&=&a_{10}+a_{11}\;,\\ h_{13}&=&a_{01}+a_{11}\;,\\ h_{23}&=&a_{01}+a_{10}\;, \end{eqnarray} and $s=\frac12\sum_{ij}a_{ij}$ and $\Delta_{ab}=h_{ab}-s$ form an orthogonal basis of the space. The count of strings is \begin{eqnarray} 2^{-2n}\binom n{a_{00},a_{01},a_{10},a_{11}} &=& 2^{-2n}\frac{n!}{\prod_{ij}a_{ij}!} \\ &\approx& 2^{-2n}\frac{\sqrt{2\pi n}}{\prod_{ij}\sqrt{2\pi a_{ij}}}\exp\left(n\log n-\sum_{ij}a_{ij}\log a_{ij}\right)\;. \end{eqnarray} As below, we can take the square roots to be constant, so they yield a factor $2^4(2\pi n)^{-\frac32}$. With \begin{eqnarray} 2a_{00}&=&s-\Delta_{12}-\Delta_{13}-\Delta_{23}\;,\\ 2a_{01}&=&s-\Delta_{12}+\Delta_{13}+\Delta_{23}\;,\\ 2a_{10}&=&s+\Delta_{12}-\Delta_{13}+\Delta_{23}\;,\\ 2a_{11}&=&s+\Delta_{12}+\Delta_{13}-\Delta_{23}\;,\\ \end{eqnarray} we get \begin{eqnarray} && 2^{-2n}2^4(2\pi n)^{-\frac32}\iiiint\prod_{ij}\mathrm da_{ij}\delta\left(\sum_{ij}a_{ij}-n\right)\min(h_{12},h_{13},h_{23})\exp\left(n\log n-\sum_{ij}a_{ij}\log a_{ij}\right) \\ &\approx& 2^4(2\pi n)^{-\frac32}\iiiint\mathrm d\Delta_{12}\mathrm d\Delta_{13}\mathrm d\Delta_{23}\mathrm ds\delta(2s-n)\left(\frac n2+\min(\Delta_{12},\Delta_{13},\Delta_{23})\right)\exp\left(-\frac1{2n}\right. \\ && \left.\vphantom{\frac1{2n}}\left((-\Delta_{12}-\Delta_{13}-\Delta_{23})^2+(-\Delta_{12}+\Delta_{13}+\Delta_{23})^2+(\Delta_{12}-\Delta_{13}+\Delta_{23})^2+(\Delta_{12}+\Delta_{13}-\Delta_{23})^2\right)\right) \\ &=& \frac n2+2^3(2\pi n)^{-\frac32}\iiint\mathrm d\Delta_{12}\mathrm d\Delta_{13}\mathrm d\Delta_{23} \min(\Delta_{12},\Delta_{13},\Delta_{23})\exp\left(-\frac2n\left(\Delta_{12}^2+\Delta_{13}^2+\Delta_{23}^2\right)\right) \\ &=& \frac n2-\frac34\sqrt{\frac n\pi}\;, \end{eqnarray} where the last integral is evaluated as below. This treatment should lend itself more readily to generalization to higher $N$. Original answer: Without loss of generality fix the first string to be all zeros. The second one has probability $2^{-n}\binom nm$ to have $m$ ones, and thus to have Hamming distance $m$ from the first string. The third one has probability $$ 2^{-n}\binom mk\binom{n-m}l $$ to have $k$ zeros where the second string has ones and $l$ ones where the second string has zeros, and thus to have Hamming distance $k+l$ from the second string and $m-k+l$ from the first string. Thus the mean minimum distance is $$ 2^{-2n}\sum_{m=0}^n\sum_{k=0}^m\sum_{l=0}^{n-m}\binom nm\binom mk\binom{n-m}l\min\left(m,k+l,m-k+l\right)\\=2^{-2n}\sum_{m=0}^n\sum_{k=0}^m\sum_{l=0}^{n-m}\frac{n!}{k!(m-k)!l!(n-m-l)!}\min\left(m,k+l,m-k+l\right)\;.$$ For large $n$, all three distances will be close to $\frac n2$, so $m\approx\frac n2$ and $k\approx\frac n4$, $l\approx\frac n4$. 
We can approximate the factorials and replace the bounded sums by unbounded integrals to obtain $$ 2^{-2n}\int_{-\infty}^\infty\mathrm dm\int_{-\infty}^\infty\mathrm dk\int_{-\infty}^\infty\mathrm dl\min\left(m,k+l,m-k+l\right)\frac{\sqrt{2\pi n}}{\sqrt{2\pi k}\sqrt{2\pi(m-k)}\sqrt{2\pi l}\sqrt{2\pi (n-m-l)}}\\\exp\left(n\log n-k\log k-(m-k)\log(m-k)-l\log l-(n-m-l)\log(n-m-l)\right)\;. $$ With $m=\left(\frac12+\mu\right)n$, $k=\left(\frac14+\kappa\right)n$ and $l=\left(\frac14+\lambda\right)n$ this is $$ 2^{-2n}\left(\frac n{2\pi}\right)^\frac32\int_{-\infty}^\infty\mathrm d\mu\int_{-\infty}^\infty\mathrm d\kappa\int_{-\infty}^\infty\mathrm d\lambda \left(\frac12+\min\left(\mu,\kappa+\lambda,\mu-\kappa+\lambda\right)\right)n \\ \frac1{\sqrt{\frac14+\kappa}\sqrt{\frac14+\mu-\kappa}\sqrt{\frac14+\lambda}\sqrt{\frac14-\mu-\lambda}} \\ \exp\left(-n\left(\left(\frac14+\kappa\right)\log\left(\frac14+\kappa\right)+\left(\frac14+\mu-\kappa\right)\log\left(\frac14+\mu-\kappa\right)\right.\right. \\ \left.\left.+\left(\frac14+\lambda\right)\log\left(\frac14+\lambda\right)+\left(\frac14-\mu-\lambda\right)\log\left(\frac14-\mu-\lambda\right)\right)\right) \\ \approx \frac n2+n\cdot2^4\left(\frac n{2\pi}\right)^\frac32\int_{-\infty}^\infty\mathrm d\mu\int_{-\infty}^\infty\mathrm d\kappa\int_{-\infty}^\infty\mathrm d\lambda\min\left(\mu,\kappa+\lambda,\mu-\kappa+\lambda\right) \\ \exp\left(-2n\left(\kappa^2+(\mu-\kappa)^2+\lambda^2+(\mu+\lambda)^2\right)\right) $$ (where we can take the square roots in the denominator to be constant because their linear terms cancel). This is a Gaussian integral with covariance matrix $$ 4n\pmatrix{2&-1&1\\-1&2&0\\1&0&2}\;, $$ which has eigenvalues $4n\left(2+\sqrt2\right)$, $4n\cdot2$ and $4n\left(2-\sqrt2\right)$ and corresponding orthonormal eigenvectors $\left(\frac1{\sqrt2},-\frac12,\frac12\right)$, $\left(0,\frac1{\sqrt2},\frac1{\sqrt2}\right)$ and $\left(-\frac1{\sqrt2},-\frac12,\frac12\right)$. We can check at this point that the integral without the minimum Hamming distance is $1$, so the approximations have preserved the normalization. By symmetry, we can evaluate the part of the integral where the minimum is $\mu$ and multiply by $3$. With the transformation $$ \pmatrix{\mu\\\kappa\\\lambda}=\pmatrix{ \frac1{\sqrt2}&0&-\frac1{\sqrt2}\\ -\frac12&\frac1{\sqrt2}&-\frac12\\ \frac12&\frac1{\sqrt2}&\frac12 } \operatorname{diag}\left(4n\left(2+\sqrt2\right),4n\cdot2,4n\left(2-\sqrt2\right)\right)^{-\frac12} \pmatrix{x\\y\\z} $$ that transforms the covariance matrix to the identity, the boundary planes $\mu\lt\kappa+\lambda$ and $\mu\lt\mu-\kappa+\lambda$, that is, $\kappa\lt\lambda$, become $$\sqrt{2-\sqrt2}\cdot x-\sqrt{2+\sqrt2}\cdot z\lt2y$$ and $$\sqrt{2-\sqrt2}\cdot x+\sqrt{2+\sqrt2}\cdot z\gt0\;,$$ respectively. The $\mu$ in the integrand becomes $\frac1{4\sqrt n}\left(\sqrt{2-\sqrt2}\cdot x-\sqrt{2+\sqrt2}\cdot z\right)$. It makes sense to rotate to $$ \pmatrix{u\\v}=\frac12\pmatrix{ \sqrt{2-\sqrt2}&-\sqrt{2+\sqrt2} \\ \sqrt{2+\sqrt2}&\sqrt{2-\sqrt2} }\pmatrix{x\\z} $$ so that the bounds are $u\lt y$ and $u\lt v$, respectively, and the factor $\mu$ in the integrand is $\frac u{2\sqrt n}$. Note that the bounds are now manifestly symmetric and include the third of the solid angle in which $u$ is the least of $u,v,y$. 
We can now evaluate the integral in spherical coordinates $y=r\cos\theta$, $u=r\sin\theta\cos\phi$ and $v=r\sin\theta\sin\phi$: \begin{eqnarray} && n\cdot(2\pi)^{-\frac32}\iiint\limits_{u\lt\min(v,y)}\frac u{2\sqrt n}\mathrm e^{-\frac12\left(u^2+v^2+y^2\right)}\mathrm du\,\mathrm dv\,\mathrm dy \\ &=& \frac{\sqrt n}2\cdot(2\pi)^{-\frac32}\int_0^\infty\mathrm e^{-\frac12r^2}r^3\mathrm dr\int_{\frac\pi4}^{\frac{5\pi}4}\int_0^{\operatorname{arccot}\cos\phi}\sin^2\theta\mathrm d\theta\cos\phi\mathrm d\phi \\ &=& \frac{\sqrt n}2\cdot(2\pi)^{-\frac32}\int_0^\infty\mathrm e^{-\frac12r^2}r^3\mathrm dr\int_{\frac\pi4}^{\frac{5\pi}4}\frac12\left(\operatorname{arccot}\cos\phi-\frac{\cos\phi}{1+\cos^2\phi}\right)\cos\phi\mathrm d\phi \\ &=& \frac{\sqrt n}2\cdot(2\pi)^{-\frac32}\int_{\frac\pi4}^{\frac{5\pi}4}\left(\operatorname{arccot}\cos\phi-\frac{\cos\phi}{1+\cos^2\phi}\right)\cos\phi\mathrm d\phi \\ &=& \frac{\sqrt n}2\cdot(2\pi)^{-\frac32}\left(\pi\left(1-\frac3{\sqrt2}\right)-\pi\left(1-\frac1{\sqrt2}\right)\right) \\ &=& -\frac14\sqrt\frac n\pi\;. \end{eqnarray} We need to multiply this by $3$ and add it to the main term $\frac n2$ to obtain $$ \boxed{\frac n2-\frac34\sqrt\frac n\pi}\;. $$ Here’s code that performs a simulation for $n=64$ to check the result. The simulation yields a mean minimum Hamming distance of $28.575$, compared to $$ \frac{64}2-\frac34\sqrt\frac{64}\pi=32-\frac6{\sqrt\pi}\approx28.615 $$ from the asymptotic analysis.
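That code isn't reproduced here; a minimal sketch of such a simulation (uniform random bit strings, pairwise Hamming distances as defined above) might look like:

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 64, 200_000

s1, s2, s3 = (rng.integers(0, 2, size=(trials, n)) for _ in range(3))
h12 = (s1 != s2).sum(axis=1)
h13 = (s1 != s3).sum(axis=1)
h23 = (s2 != s3).sum(axis=1)

print(np.minimum(np.minimum(h12, h13), h23).mean())   # simulation: ~28.58
print(n/2 - 0.75*np.sqrt(n/np.pi))                    # asymptotic: ~28.615
```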
Topology of the product $[0,1) \times (0,1]$
Adapted from https://proofwiki.org/wiki/Projection_on_Real_Euclidean_Plane_is_not_Closed_Mapping: Take $\mathbb R^2$ with the usual topology, and the set $S=\{(x,y)\in\mathbb R^2\mid xy=1\}$. $S$ is obviously closed in $\mathbb R^2$, but its projection on the $x$ axis: $(-\infty,0)\cup(0,\infty)$ is not.
Does $-x^2$ mean $-(x^2)$ or $(-x)^2$?
If it helps, think of it as $0-x^2$. The Exponent wins first, then the Subtraction. Hence it means $-(x^2)$.
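For what it's worth, most programming languages and computer algebra systems follow the same convention; for example, in Python:

```python
print(-3**2)     # -9: parsed as -(3**2), the exponent binds tighter
print((-3)**2)   # 9
```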
If $f,g: [0,1] \rightarrow \mathbb{R} \ $ are differentiable with $f'(x)g(x) = g'(x)f(x)$, does there exist $c \in (0,1)$ such that $g(c) = 0$?
Edited: The issue is that such functions don't exist. We can see this by slightly modifying your argument. Indeed, as in your proof, define $h: (0,1) \to \mathbb R$ via $h(x)= \frac{g(x)}{f(x)}$. Then $h'(x)=0$, which implies that there exists a constant $C$ such that $$g(x)=C f(x) \quad \forall x \in (0,1) \,.$$ But this is impossible, as the continuity of $f,g$ at $x=0$ implies $$g(0)=\lim_{x \to 0^+} g(x)= \lim_{x \to 0^+} C f(x)= C \lim_{x \to 0^+} f(x)=C f(0)=0,$$ which is a contradiction.
Induction proof of contest math problem
Yep, that looks like good work. I would personally write $m(n)$ instead of just $m$ when relating to $m(n + 1)$, but the logic is good. I like the way that you've constructed $m$ to be square itself!
what does E[$c^x$] mean in probability
EDIT: In general, for any function $f(x)$, we have $$ E[f(x)]=\sum_{i}f(i)P(x=i). $$ END EDIT Expanding on the comment above, $$ E[c^x]=c^1P(x=1)+c^{-1}P(x=-1)=pc+\frac{1-p}{c}. $$ Thus if we have $E[c^x]=1$, we want $$ pc+\frac{1-p}{c}=1 $$ or $$ pc^2+(1-p)=c $$ or $$ pc^2-c+(1-p)=0. $$ The quadratic formula can now be used to solve for $c$ in terms of $p$.
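A symbolic check of that last step (a SymPy sketch; it factors the quadratic and recovers the roots $c=1$ and $c=\frac{1-p}{p}$):

```python
from sympy import symbols, factor, solve

c, p = symbols('c p')
print(factor(p*c**2 - c + (1 - p)))    # (c - 1)*(c*p + p - 1)
print(solve(p*c**2 - c + (1 - p), c))  # [1, (1 - p)/p]
```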
Topology of the complement of a countable union of divisors
Here is a proof of surjectivity. (I am assuming that $U$ is open and connected in the classical topology.) First of all, equip $U$ with a complete Riemannian metric $g$ and let $d$ denote the associated distance function. (You obtain $g$ by multiplying your favorite Riemannian metric on ${\mathbb C}P^n$ by a positive scalar function on $U$ which diverges to $\infty$ at least quadratically as you approach the boundary of $U$.) Next, given a loop $L$ in $U$ (all loops will be based at a point $x\in U'$) you can approximate $L$ by a polygonal loop $P$ with $k$ vertices so that $P$ represents the same based homotopy class as $L$. (Here "polygonal" can be defined, for instance using circular arcs contained in projective lines in ${\mathbb C}P^n$.) Let $Pol$ denote the space of polygonal loops in $U$ based at $x$ with $k$ vertices and base-homotopic to $L$. I will equip $Pol$ with the Hausdorff distance defined via the metric $d$ on $U$. I will leave it to you to prove that $Pol$ is a complete metric space. It follows from Sard's theorem or from Kleiman transversality, whatever you prefer, that for each divisor $D$ in ${\mathbb C}P^n$, the subset $Pol(D)$ consisting of loops in $Pol$ which are disjoint from $D$, is open and dense. Now, given a countable collection $\{D_j: j\in J\}$ of divisors, it follows from the Baire category theorem (applied to $Pol$) that the intersection $$ \bigcap_{j\in J} Pol(D_j) $$ is still dense in $Pol$. Hence, there exists a polygonal loop $P$ based at $x$ and contained in $$ U'= U - \bigcup_{j} D_j $$ such that $P$ represents the same element of $\pi_1(U,x)$ as $L$. Thus, $\pi_1(U',x)\to \pi_1(U,x)$ is surjective.
Lowest Common Multiple / Aggregation of Weights
Let the total weight be $22\times29\times3\times20=38280$. $35$ percent of $38280$ is $7\times22\times29\times3$, so each $A$-indicator should have weight $7\times29\times3$. $10$ percent of $38280$ is $22\times29\times3\times2$, so each $B$-indicator has weight $11\times29\times3$. And so on.
If $u_n \to u$ in $L^2(\Omega)$, and $u_n \in L^\infty(\Omega)$, is $u \in L^\infty(\Omega)$?
Hint: Construct a counterexample. Take $u$ to be something unbounded but square integrable (say, $1/|x|^{1/4}$ on $[-1,1]$), and take $u_n$ to be truncations at the singularity.
What does it mean to not be able to take the derivative of a function multiple times?
As an example: $f(x)=6|x|$ does not have a derivative at $x=0$. $g(x)=3x|x|$ has a first derivative everywhere, namely $f(x)$, but no second derivative at $x=0$. $h(x)=x^2|x|= |x^3|$ has first derivative $g(x)$ and second derivative $f(x)$ everywhere, but no third derivative at $x=0$.
How to show the set $\operatorname{Hom}_K(L,\bar{K})$ of all $K$-embeddings of $L$ is partitioned into $m$ equivalence classes of $d$ elements each?
I guess it is because the degree of the minimal polynomial $p(x)$ of $x$ over $K$ is $m$; then $p(x)$ has $m$ different roots $a_1,\cdots,a_m$ in $\bar K$, and we can let $\sigma x=a_i$ $(i=1,\cdots,m)$. But how to show there exist $d$ different $K$-embeddings sending $x$ to $a_i$ for each $i$?
Check if tautology (w/o truth table)
The lefthand side should give you $(A+BC)(B+C)$, not $A+(BC)(B+C)$. That in turn expands to $AB+AC+BC+BC=AB+AC+BC$ when you ‘multiply it out’, which is exactly what you want. Your $A(B+C)+BC$, though not obtainable from $A+(BC)(B+C)$, is actually obtainable from $(A+BC)(B+C)$, so apparently you were working with the correct parenthesization even if you didn’t write it. In any case, with $A(B+C)+BC$ you’re essentially done: it immediately expands to $AB+AC+BC$.
Notation Involving Sobolev Spaces I Cannot Find the Meaning Of
The solution of the PDE you are trying to solve is a function $$u : [0,T] \times \Omega \to \mathbb{R}.$$ In other words, given a time and a spatial coordinate, you get a value. By currying, this is equivalent to a function $U$ from $[0,T]$ to functions from $\Omega$ to $\mathbb{R}$. From this point of view, for any time $t$, you have a function $U(t)(\cdot) = u(t, \cdot)$ on $\Omega$. The notation $W^{1,p} (0,T; \mathbb{L}^2 (\Omega))$ is coherent with this second point of view: the indices $0$, $T$ correspond to the domain of $U$, while $\mathbb{L}^2 (\Omega)$ is its range. In other words, $U$ is seen as a function from $[0,T]$ taking its values in $\mathbb{L}^2 (\Omega)$. The derivative of $U$ is then the partial derivative of $u$ with respect to time. To expand things a little in your example (a reference would be nice to check that I did not make a mistake here): for (almost?) any $t \in [0,T]$, the function $u(t, \cdot)$ belongs to $\mathbb{L}^2 (\Omega)$: $$\left( \int_\Omega |u(t,x)|^2 \ dx \right)^{\frac{1}{2}} < +\infty.$$ the function $U: t \mapsto u(t, \cdot)$ belongs to $\mathbb{L}^1 (0,T; \mathbb{L}^2 (\Omega))$: $$\int_0^T\left( \int_\Omega |u(t,x)|^2 \ dx \right)^{\frac{1}{2}} \ dt< +\infty.$$ the function $U: t \mapsto u(t, \cdot)$ belongs to $W^{1,p} (0,T; \mathbb{L}^2 (\Omega))$: $$\left( \int_0^T\left( \int_\Omega |u(t,x)|^2 \ dx \right)^{\frac{p}{2}} \ dt \right)^{\frac{1}{p}} < +\infty \\ \text{ and } \\ \left( \int_0^T\left( \int_\Omega |\partial_t u(t,x)|^2 \ dx \right)^{\frac{p}{2}} \ dt \right)^{\frac{1}{p}}< +\infty.$$
false proof that $\sqrt 4$ is irrational.
You begin your proof saying that $\sqrt4=\frac pq$ with $p$ and $q$ natural coprimes greater than $1$. That works in the case of $\sqrt2$, since $\sqrt2$ is not a natural number, and then, yes, if it could be written as $\frac pq$ with $p$ and $q$ natural coprime numbers, then both of them would have to be greater than $1$. But $\sqrt4$ is a natural number. And the assumption that $\sqrt4$ can be written as $\frac pq$ with $p$ and $q$ coprime and $p,q>1$ is false.
Problems on Normal Variable
The probability density for $N(0,\sigma)$ is $p_{\sigma}(x)=\frac{1}{\sqrt{2\pi}\sigma}e^\frac{-x^2}{2\sigma^2}$. Therefore the probability that the signal is $A$ is $$P(A)=\frac{p_2(2)}{p_2(2)+p_3(2)}.$$
Solving for $y$ in $y = x \ln(y)$
Here you will want to use the other branch of the Lambert function, $W_{-1}(x)$. Recall that for $-1/e \le x < 0$, two of the infinitely many branches of the Lambert function take on real values: the principal branch $W(x)=W_0(x)$ and $W_{-1}(x)$. For $x \geq e$, then, your original expression can take the Lambert function to be either of those two real branches. Which branch to use would depend on your application, though...
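Concretely, writing $y=e^u$ turns $y=x\ln y$ into $ue^{-u}=1/x$, so $y=-x\,W(-1/x)$; a small SciPy sketch evaluating both real branches (the value $x=5$ is arbitrary):

```python
import numpy as np
from scipy.special import lambertw

x = 5.0                        # any x >= e gives two real solutions
for k in (0, -1):              # principal branch W_0 and W_{-1}
    y = (-x * lambertw(-1/x, k)).real
    print(k, y, x*np.log(y))   # y and x*ln(y) agree on each branch
```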
Simple paper cuts problem
It should not be difficult to unfold this with your lines; something like this (the accompanying figure is not preserved). Alternatively, draw some lines on the possible solutions.
For a branch-defined function, analyze the uniform convergence of it and of its series
I think e) is false. In fact, $s(1+\frac 1 k) \to \sum_{n=2}^{\infty} \frac 1 {n^{2}}+1$ whereas $s(1) = \sum_{n=2}^{\infty} \frac 1 {n^{2}}$. Your argument for a) is correct. For c), use the fact that $f_n(x) \leq \frac x {n^{2}}$ for all $x \geq 0$; this also proves uniform convergence on $[0,a]$ by the M-test. The second part of d) follows from b).
Prove if a graph does not contain $K_4$ minor, then it is 3-colorable.
This is a particular case of the hadwiger conjecture. The case for which $k=4$ was in fact proved by Hadwiger in the same paper in which he published his conjecture. The proof is simple by induction on the number of vertices, I paraphrase it from this article. Suppose there is a graph $G$ that is not $3$-colorable and does not contain $G$ as a minor. Select such a $G$ with the minimum number of vertices. Clearly $G$ is connected and must contain a circuit. pick an edge $v_1v_2$ on the circuit and let $X\neq\varnothing$ be a minimum vertex set seperating $v1$ and $v_2$. Notice that $X$ must be an independent set, as otherwise $G$ would contain $K_4$ as a minor. Let $G-e=G_1\cup G_2$ and $G_1\cap G_2=X$, such that $G_1$ and $G_2$ are connected and $v_1\in G_1,v_2\in G_2$. Form $G_i'$ from $G$ by contracting all edges in $G_i$ so that all vertices of $G_i$ are identified with $v_i$. Clearly both $G_1$ and $G_2$ are $3$-colorable. Suppose that $G_1'$ is $3$-colored with $u_1$ green and $u_2$ blue, and $G_2'$ ; is $3$-colored with $u_1$ red and $u_2$ green. Then these colorings induce a 3-coloring of $G$ with $u_l$ red, $u_2$ blue, and all vertices of X green.
Chance on winning by throwing a head on first toss.
Consider the winners of the triples. $S = \{\underbrace{HHH}_A, \underbrace{HHT}_A, \underbrace{HTH}_A, \underbrace{THH}_B, \underbrace{HTT}_A, \underbrace{THT}_B, \underbrace{TTH}_C, \underbrace{TTT}_{\text{repeat}}\}$ Now can you tell the probabilities of each winning?
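To check the counts, here is a tiny enumeration sketch (assuming the rule implied by the braces: players toss in order $A, B, C$ and the first head wins, with $TTT$ restarting the round):

```python
from itertools import product

wins = {"A": 0, "B": 0, "C": 0}
deciding = 0
for triple in product("HT", repeat=3):
    s = "".join(triple)
    if s == "TTT":
        continue                  # the round repeats; contributes to no one
    deciding += 1
    winner = "ABC"[s.index("H")]  # position of the first head decides
    wins[winner] += 1

for player, w in wins.items():
    print(player, w, "/", deciding)  # A: 4/7, B: 2/7, C: 1/7
```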
How can you prove this metric space exercise?
Clearly any positive rescaling of a metric is again a metric and satisfies all the axioms. Regarding the second part, however: a metric must satisfy $d(x,x)=0$, and adding a constant gives $k+d(x,x)=k$, so $k+d(x,x)=0\Rightarrow k=0$.
Probability of B given not-A
First, from the definitions: \begin{equation*} P(B | !A) = \frac{P(!A \cap B)}{P(!A)} \, , \end{equation*} while \begin{equation*} 1 - P(B | A) = 1 - \frac{P(A \cap B)}{P(A)} = \frac{P(A) - P(A \cap B)}{P(A)} = \frac{P(A \cap !B)}{P(A)} = P(!B | A) \, . \end{equation*} So the question comes down to whether \begin{equation*} P(B | !A) = P(!B | A) \, , \end{equation*} and this fails in general. If $A$ and $B$ are mutually exclusive/disjoint with $P(A)+P(B) < 1$, for example, then $B \subseteq !A$, so the LHS is $P(B)/P(!A) < 1$, while the RHS is $1$. Intuitively, the truth of $A$ forces $B$ to be false (so $1 - P(B|A) = 1$), but knowing that $A$ is false does not guarantee that $B$ is true. Note also that the unconditional version of your statement, $P(!A \cap B) = 1 - P(A \cap B)$, is even more restrictive: $!A \cap B$ and $A \cap B$ are disjoint with union $B$, so the equation says precisely that $P(B) = 1$, i.e. that $B$ is always true.
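A small sanity check on a four-point space (a sketch; the outcome weights and events are arbitrary):

```python
from fractions import Fraction as F

P = {"w1": F(1, 4), "w2": F(1, 4), "w3": F(1, 4), "w4": F(1, 4)}
A = {"w1", "w2"}
B = {"w1", "w3", "w4"}   # overlaps A, but is not certain

def prob(E):
    return sum(P[w] for w in E)

def cond(E, given):
    return prob(E & given) / prob(given)

not_A = set(P) - A
print(cond(B, not_A))    # P(B | !A) = 1
print(1 - cond(B, A))    # 1 - P(B | A) = 1/2, so the two disagree
```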
For a linear transformation $f \colon V \to W$, $\dim \mathrm{Ker}\, f + \dim \mathrm{Im}\, f = \dim V$
The Dimension Theorem is often called the "Rank Nullity Theorem", so you may have more luck finding it under that name. The result holds for any linear transformation and vector spaces; it's not even restricted to finite dimensional spaces, so long as your sum is taken to be a sum of cardinals and you assume the Axiom of Choice (so that all vector spaces have bases). The Dimension Theorem, aka Rank Nullity Theorem, states: Dimension Theorem. If $\mathbf{V}$ and $\mathbf{W}$ are any vector spaces (over the same field), and $T$ is any linear transformation from $\mathbf{V}$ to $\mathbf{W}$, then $\dim(\mathrm{ker} T) + \dim(\mathrm{Im} T) = \dim(\mathbf{V})$ where the sum is a sum of cardinals. The idea is to start with a basis $\beta$ for $\mathrm{ker} T$, and then extend it to a basis $\beta\cup\gamma$ for all of $\mathbf{V}$. Then you show that the image of $\gamma$ under $T$ is linearly independent: if $\mathbf{v}_1,\ldots,\mathbf{v}_n\in\gamma$ are such that $$\alpha_1T(\mathbf{v}_1)+\cdots+\alpha_nT(\mathbf{v}_n) = \mathbf{0}$$ in $\mathbf{W}$, then you can rewrite this as $$T(\alpha_1\mathbf{v}_1+\cdots+\alpha_n\mathbf{v}_n) = \mathbf{0}$$ hence $\alpha_1\mathbf{v}_1+\cdots+\alpha_n\mathbf{v}_n\in\mathrm{ker}(T)$. That means that this linear combination of vectors of $\gamma$ lies in the span of $\beta$, but since $\beta\cup\gamma$ is linearly independent, the only way this can happen is if $\alpha_1\mathbf{v}_1+\cdots+\alpha_n\mathbf{v}_n=\mathbf{0}$; this is a linear combination of vectors in $\gamma$, which is linearly independent, so $\alpha_1=\cdots=\alpha_n=0$. This proves that $$T(\gamma) = \{T(\mathbf{v})\mid \mathbf{v}\in\gamma\}$$ is a linearly independent subset of $\mathbf{W}$. It is now easy to show that $\mathrm{Im}(T) = \mathrm{span}T(\beta\cup\gamma) = \mathrm{span}(T(\gamma))$, so that $\dim(\mathrm{Im}(T)) = |\gamma|$. So you have that $$\dim\mathbf{V} = |\beta\cup\gamma| = |\beta|+|\gamma| = \dim(\mathrm{ker}\; T) + \dim(\mathrm{Im}\; T),$$ as desired.
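For a concrete finite-dimensional sanity check (a sketch; the matrix is a random integer matrix of my choosing):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
T = rng.integers(-3, 4, size=(4, 6)).astype(float)  # a linear map R^6 -> R^4

rank = np.linalg.matrix_rank(T)        # dim Im(T)
nullity = null_space(T).shape[1]       # dim ker(T), computed independently
print(rank, nullity, rank + nullity)   # rank + nullity == 6 == dim(V)
```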
My proof of Bolzano's theorem
(Some people use $\mathbf{N}$ for natural numbers instead of the $\mathbb{N}$ symbol mentioned in the comments, too. $N$ is fine, and something like $A$ would be even better, because $N$ often stands for a number $\in \mathbb{N}$ rather than a set of numbers, but that's a question of taste. Also, write $\sup A$ (with \sup) instead of $sup A$.) I think the proof is correct. For what it's worth, instead of assuming $f(a) <0 $ and $f(b) >0$, with a slight modification it suffices to assume that $f(a)$ and $f(b)$ have opposite signs: let $a' \in \{a, b\}$ be such that $f(a') = \min\{f(a), f(b)\}$, and likewise $b'$ such that $f(b') = \max\{f(a), f(b)\}$. Now clearly $f(a') < 0$ and $f(b') > 0$, and one can proceed as you've done, but working with $a'$ instead of $a$, and so on.
Change of the average when a number is removed
The sum of the $n = 21$ numbers is $n \cdot \overline x= 21\,\overline x$, where $\overline x$ is their average. Then subtract the removed (largest) number from this sum. After that, divide the new result by $n-1 = 20$ to get the new average.
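In code (a sketch; the numbers are made up):

```python
n = 21
avg = 10.0        # hypothetical average of the 21 numbers
largest = 30.0    # hypothetical removed number

total = n * avg                          # sum of all 21 numbers
new_avg = (total - largest) / (n - 1)
print(new_avg)                           # average of the remaining 20: 9.0
```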
How can 1 be equal to 2?
The problem is here: $a^2=b^2 \Rightarrow a=b \underbrace{\vee}_{\text{or}} a=-b$. In this case, only the second one counts: $(1-\frac32)^2=(2-\frac32)^2 \Rightarrow \require{cancel} \cancel{1-\frac32=2-\frac32} \vee 1-\frac32=-2+\frac32$, but the first one gives a contradiction, so it doesn't count.
What's the intuition for why repeated div and mod converts a number to another base?
Maybe consider a simpler example to start with: convert $20$ from base $10$ to base $2$. If it helps, think of this example as counting $20$ apples in the base-$2$ system. Step 1: Count how many $2$-apple groups there are. $\color{red}{20 = 2\times 10 + }\color{green}{0}$ This division tells us that the number of $2$-apple groups is $10$ and $1$-apple groups is $0$. Step 2: Count how many $4$-apple groups there are. $\color{red}{10 = 2\times 5+ }\color{green}{0}$ This division tells us that the number of $4$-apple groups is $5$, $2$-apple groups is $0$ and $1$-apple groups is $0$. Step 3: Count how many $8$-apple groups there are. $\color{red}{5 = 2\times 2 + }\color{green}{1}$ This division tells us that the number of $8$-apple groups is $2$, $4$-apple groups is $1$, $2$-apple groups is $0$ and $1$-apple groups is $0$. Step 4: Count how many $16$-apple groups there are. $\color{red}{2 = 2\times 1 +}\color{green}{0}$ This division tells us that the number of $16$-apple groups is $1$, $8$-apple groups is $0$, $4$-apple groups is $1$, $2$-apple groups is $0$ and $1$-apple groups is $0$. Step 5: Count how many $32$-apple groups there are. $\color{red}{1 = 2\times 0 + }\color{green}{1}$ This division tells us that the number of $32$-apple groups is $0$, $16$-apple groups is $1$, $8$-apple groups is $0$, $4$-apple groups is $1$, $2$-apple groups is $0$ and $1$-apple groups is $0$. Overall: $$\left(20\right)_{10} = \left(\color{green}{10100}\right)_{2}$$ As you can see, changing base is just changing the size by which you group things. I hope this gives you some idea of place value!
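The apple-counting steps are exactly what repeated `divmod` does; here is a minimal sketch:

```python
def to_base(n, b):
    # Each remainder is the digit counting leftover b^k-sized groups
    digits = []
    while n > 0:
        n, r = divmod(n, b)   # n = b * quotient + r
        digits.append(r)      # least significant digit first
    return "".join(str(d) for d in reversed(digits)) or "0"

print(to_base(20, 2))  # "10100", matching the five steps above
```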
What is the relationship between the "set system" in graph theory and the one in measure theory?
This is a matter of conventions and expectations. For example, the set system of a hypergraph may have the property that it is a connected hypergraph. There are other properties that a set system may satisfy. For example, the set system of a topological space is required to satisfy certain properties. An alternative approach is to start with a base set and then to define an indicator function on the power set of the base set. The indicator function has the value $1$ if and only if the subset is an element of the corresponding set system. Using this approach we can include other functions on the power set. For example, the rank function of a matroid is required to satisfy certain properties. Similarly, in a measure space, the measure is required to satisfy certain properties.
Isomorphism of quotient ring of polynomial ring
One of the most important facts about finite fields is that all finite fields of the same size are isomorphic! Finding the isomorphism can be a little trickier. For a problem expressed like yours, the most direct approach would be something like finding a root of the polynomial $t^2 - 2$ in the field $\mathbf{F}_3[x] / (x^2 - 2x - 1)$. Aside: it sometimes helps to use different indeterminate variables to keep everything straight: e.g. let your two fields be $\mathbf{F}_3[x]/(x^2 - 2)$ and $\mathbf{F}_3[y]/(y^2 - 2y - 1)$.
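Here is a brute-force sketch of that root search, with arithmetic in $\mathbf{F}_3[x]/(x^2 - 2x - 1)$ (elements represented as pairs $(a, b)$ meaning $a + bx$; the helper names are mine):

```python
# Multiplication in F_3[x]/(x^2 - 2x - 1), using the reduction x^2 = 2x + 1
def mul(u, v, p=3):
    a, b = u
    c, d = v
    # (a + b*x)(c + d*x) = ac + (ad + bc)x + bd*x^2, then x^2 -> 2x + 1
    const = (a * c + b * d) % p
    lin = (a * d + b * c + 2 * b * d) % p
    return (const, lin)

# Roots of t^2 - 2, i.e. elements whose square is 2 + 0*x
roots = [(a, b) for a in range(3) for b in range(3)
         if mul((a, b), (a, b)) == (2, 0)]
print(roots)  # [(1, 2), (2, 1)]: map t (with t^2 = 2) to either root
```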
How do I determine the transformation matrix $T$ of the coordinate transformation from the basis $E$ to the basis $B$?
Hint: The matrix $$ M= \begin{bmatrix} 1 & 0 & 2\\ 2 & -1 & 3\\ 4 & 1 & 8 \end{bmatrix} $$ represents the transformation: $$ \mathbf{e_1}\to \mathbf{b_1} \qquad \mathbf{e_2}\to \mathbf{b_2} \qquad \mathbf{e_3}\to \mathbf{b_3} $$ and its inverse: $$M^{-1} = \begin{bmatrix} -11 & 2 & 2\\ -4 & 0 & 1\\ 6 & -1 & -1 \end{bmatrix} $$ represents the transformation: $$ \mathbf{b_1}\to \mathbf{e_1} \qquad \mathbf{b_2}\to \mathbf{e_2} \qquad \mathbf{b_3}\to \mathbf{e_3} $$ Use $M^{-1}$ to substitute $\mathbf{e_i}$ in the vector $\mathbf{v}$ and $M$ to substitute $\mathbf{b_i}$ in the vector $\mathbf{w}$ Note that the columns of $M$ are the vectors $\mathbf{b_i}$ in the standard basis, so $M\mathbf{e_i}=\mathbf{b_i}$. In the same way the columns of $M^{-1}$ are the vectors of the standard basis expressed in the basis $\mathbf{b_i}$. So, by linearity, your vector $\mathbf{v}$ that in the standard basis is $\mathbf{v}=2\mathbf{e_1}+\mathbf{e_2}+2\mathbf{e_3}$, in the basis $B$ is: $$ M^{-1}\mathbf{v}= \begin{bmatrix} -11 & 2 & 2\\ -4 & 0 & 1\\ 6 & -1 & -1 \end{bmatrix} \begin{bmatrix} 2\\ 1\\ 2 \end{bmatrix}= \begin{bmatrix} -16\\ -6\\ 9 \end{bmatrix} $$ and the vector $\mathbf{w}$ that in the basis $B$ is $[1,2,3]^T$ , in the standard basis is: $$ M\mathbf{w}= \begin{bmatrix} 1& 0 & 2\\ 2 & -1 & 3\\ 4 & 1 & 8 \end{bmatrix} \begin{bmatrix} 1\\ 2\\ 3 \end{bmatrix}= \begin{bmatrix} 7\\ 9\\ 30 \end{bmatrix} $$
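If you want to verify these computations mechanically, here is a NumPy sketch:

```python
import numpy as np

M = np.array([[1, 0, 2], [2, -1, 3], [4, 1, 8]])
M_inv = np.linalg.inv(M)

v = np.array([2, 1, 2])   # coordinates in the standard basis E
w = np.array([1, 2, 3])   # coordinates in the basis B

print(M_inv @ v)          # [-16.  -6.   9.] : v expressed in B
print(M @ w)              # [ 7  9 30]       : w expressed in E
```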
About positive semidefinite matrices
Let $$ A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} $$ Then $$ A^2 = \begin{pmatrix} 5 & 3\\ 3 & 2\end{pmatrix}, \quad B^2 = \begin{pmatrix} 2 & 2 \\ 2 & 2\end{pmatrix} $$ We have $$ A - B = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad A^2- B^2 = \begin{pmatrix} 3 & 1 \\ 1 & 0\end{pmatrix} $$ We have $A-B \ge 0$, but $\det(A^2 - B^2) = -1 < 0$, that is $A^2-B^2 \not\ge 0$.
Can the sum of two measurable functions be non-measurable if they are valued in a general normed space instead of $ \mathbb{R} $?
Edit: As OP points out, this solution only works if $X$ is second-countable. Let $f, g: \mathbb{R} \to X$ be Borel measurable and $h(s) = f(s) + g(s)$. Consider mappings $$ \begin{align} \mathbb{R} \ni s & \stackrel{h_1}{\mapsto} (s, s) \in \mathbb{R}^2 \\[1ex] \mathbb{R}^2 \ni (s, t) & \stackrel{h_2}{\mapsto} (f(s), g(t)) \in X^2 \\[1ex] X^2 \ni (x, y) & \stackrel{h_3}{\mapsto} x+y \in X. \end{align} $$ Then $$s \stackrel{h_1}{\mapsto} (s, s) \stackrel{h_2}{\mapsto} (f(s), g(s)) \stackrel{h_3}{\mapsto} f(s) + g(s)$$ so $h = h_3 \circ h_2 \circ h_1$. But $h_1$ and $h_3$ are continuous hence Borel measurable and $h_2$ is Borel measurable. Thus so is $h$.
How can I write an algorithm to perform the following calculation exactly? (references accepted)
I have absolutely no experience using Haskell, but given a library supporting arbitrary precision floating point numbers and arbitrary size integers I'd do the following: Define a function $\tt{check}$ that takes an integer as input and returns a bool, with $$ {\tt check}(r) = {\tt true} ~~~ \Leftrightarrow ~~~ 3^{3r+m} K^3 \ge (N + 3^r)^3, $$ i.e. a function that, as you put it, exactly compares the integer to the desired output. Compute the expression in floating point arithmetic with some starting precision, and store the result as an arbitrarily large integer $r$. Check whether ${\tt check}(r) = {\tt true}$ and ${\tt check}(r-1) = {\tt false}$. If that is the case, $r$ is proven to be the correct output. If not, double the precision and repeat. I'm not sure if that counts as a non-terrible solution, but I think it should terminate relatively fast. Indeed, in most cases it should terminate on its first try, given a sensible starting precision. (I also assume that $3^{m/3}K-C > 0$, otherwise $\log$ isn't well-defined anyway.) If you want to avoid floating point arithmetic altogether, your idea of doubling integers and then using binary search should be fast enough. You only need something like $4 \log_2(\textit{result})$ checks to do this. I expect a floating-point-guess, integer-check method to be slightly faster, though.
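Here is a rough sketch of this scheme in Python rather than Haskell (using `mpmath` for adjustable precision; the closed-form guess assumes the expression is $\lceil \log_3(N/(3^{m/3}K - 1)) \rceil$, i.e. $C = 1$, so adjust it to your actual formula):

```python
from mpmath import mp, mpf, ceil, log, power

def check(r, N, K, m):
    # Exact integer test: 3^(3r+m) * K^3 >= (N + 3^r)^3
    return 3 ** (3 * r + m) * K ** 3 >= (N + 3 ** r) ** 3

def smallest_r(N, K, m):
    prec = 53                                  # start at double precision
    while True:
        mp.prec = prec
        denom = power(3, mpf(m) / 3) * K - 1   # assumed form of the expression
        r = int(ceil(log(mpf(N) / denom, 3)))  # floating-point guess
        if check(r, N, K, m) and not check(r - 1, N, K, m):
            return r                           # guess verified exactly
        prec *= 2                              # otherwise retry, more precisely
```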
Comparing complex numbers
We can define a partial order on $\Bbb C$ by $z_1\prec z_2$ if and only if $|z_1|<|z_2|$. We can define a total order on $\Bbb C$ in various ways--I'll give you a few if you're interested. We cannot give an order that is compatible with the operations on $\Bbb C$ so that $\Bbb C$ is an ordered field. If we could, would $i$ be positive or negative? (Either way, the ordered-field axioms would force $-1 = i\cdot i > 0$, a contradiction.) Basically, it depends on how you want to define "bigger/smaller" in this instance. Give us more detail, and we can better answer your question.
If $a \equiv b \pmod n$ and $c+d = n$, does $ca+bd \equiv 0 \pmod n$?
If $a\equiv b\pmod n$, then $$ca+bd \equiv ca+ad = a(c+d) = an \equiv 0 \pmod n.$$
Notation for the index of minimum value of several variables
Maybe it is not the best way, but it is a way, and it works when the indices are positive: $$\max_i \left\{\, i \cdot \bigl(1-\operatorname{sign}(d_i - d_{\min})\bigr) \right\}$$ where $\operatorname{sign}(x)$ equals $-1$ if $x<0$, zero if $x=0$, and $+1$ if $x>0$. The steps: 1) subtract the minimum from each entry; 2) take the sign of each; 3) compute $1$ minus each sign; 4) multiply each by its index; 5) take the maximum. Worked example with $\{20, 15, 8, 12, 42\}$: 1) subtract the min: $\{12, 7, 0, 4, 34\}$; 2) sign of each: $\{1, 1, 0, 1, 1\}$; 3) $1-$sign: $\{0, 0, 1, 0, 0\}$; 4) multiply by index: $\{0, 0, 3, 0, 0\}$; 5) max of all: $3$. (If the minimum is attained at several indices, this returns the largest of them.)
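A quick check of the formula on the example (a sketch):

```python
import numpy as np

d = np.array([20, 15, 8, 12, 42])
i = np.arange(1, len(d) + 1)                   # 1-based indices
idx = np.max(i * (1 - np.sign(d - d.min())))   # the formula above
print(idx)                                     # 3
```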
Is my Proof Correct: $(X, \mathcal{A}, \mu)$ $\sigma-$finite measure space then $\mu^{*}(B) < \infty$
Let's try inserting $B$ into your decomposition of $X$. Notice that if $X\subseteq \cup_{n}A_n$ as in your proof attempt, then since $B\subseteq X$, we can write $$X = B \cup \left(\bigcup_{n} (A_{n}\setminus B)\right).$$ Then $\mu (B)<\infty$ since $\mu$ is $\sigma$-finite, and $\mu (B) = \mu^{*} (B)$ since $B \in \mathcal{A}^{*}$ (this last claim I am unsure about).
How to find all positive integers $n$ such that $(n+k)\nmid \binom{2n}{n}$
If $k\le 0$ then for any odd prime $p>-2k$ let $n=p-k$; then $p\le n<2p$ and $2n<3p$, so $p^2\,\|\,(2n)!$ while $p\,\|\,n!$, hence $p\nmid \binom{2n}{n}$; since $n+k=p$, this gives $(n+k) \nmid \binom{2n}{n}$. Infinitely many choices of $p$ lead to infinitely many such $n$. If $k>1$ then by Bertrand's postulate there is a prime $p>2$ with $k<p<2k$. Then for any $t>1$ let $n=p^t+(p-k)$; then $2n=2p^t+2(p-k)$, and since $0<2(p-k)<p$ it follows from Lucas's theorem that $\binom{2n}n \equiv 2\binom{2p-2k}{p-k}\not\equiv 0\pmod{p}$. Since $p\mid n+k=p^t+p$, it follows that $(n+k)\not\mid\binom{2n}n$. Infinitely many choices of $t$ lead to infinitely many such $n$. In the remaining case $k=1$ we always have $(n+1)\mid\binom{2n}n$: since $\gcd(2n+1,n+1)=1$, $$ \frac{2n+1}{n+1}\binom{2n}{n} = \binom{2n+1}{n} \in \mathbb{Z} \implies (n+1)\left\vert \binom{2n}n\right. $$
What does it mean that a category satisfies right Ore condition?
It would be great if you could supply us with more context here. Are you interested in a category $\mathcal{C}$ satisfying the right Ore condition, or rather in a collection of morphisms $S\subset \text{Mor}(\mathcal{C})$ satisfying the right Ore condition? These are somewhat different things. Following nlab, a category $\mathcal{C}$ satisfies the right Ore condition if for every two morphisms $A\to B$ and $C\to B$, there exist an object $D$ and morphisms $D\to A$ and $D\to C$ yielding a commutative square. Any abelian category satisfies this condition, and $D$ is simply the pullback of $A\to B\leftarrow C$. More generally, any additive category having all kernels also has all pullbacks and thus satisfies the right Ore condition. A slightly different animal is the following: let $\mathcal{C}$ be a category and let $S\subseteq \text{Mor}(\mathcal{C})$ be a collection of morphisms. Then $S$ is said to satisfy the right Ore condition if for any morphism $A\to B$ and any morphism $s\colon C\to B$ with $s\in S$, there exists an object $D$ together with morphisms $D\to C$ and $t\colon D\to A$ such that $t\in S$ and the resulting square commutes. In fact, $S$ is called a right multiplicative system if it is closed under composition, satisfies the right Ore condition as above, and satisfies one more axiom (see the definition of a right multiplicative system here). The point of a right multiplicative system $S$ in a category $\mathcal{C}$ is that you can build a new category $\mathcal{C}[S^{-1}]$ obtained by declaring all morphisms in $S$ to become isomorphisms. Thanks to the structure on $S$, the category $\mathcal{C}[S^{-1}]$ is relatively easy to understand. A decent example of the latter is the following: let $\mathcal{A}$ be an abelian category and let $\mathcal{S}$ be a Serre subcategory, i.e. given a short exact sequence $$0\to A\to B\to C\to 0,$$ $B\in \mathcal{S}$ if and only if $A,C\in \mathcal{S}$. Define a system $S\subseteq \text{Mor}(\mathcal{A})$ by taking those morphisms $s$ such that $\ker(s),\operatorname{coker}(s)\in \mathcal{S}$. Then $S$ is both a left and right multiplicative system. In fact, the localized category $\mathcal{A}[S^{-1}]$ is again abelian.
Show the left shift $T$ and $T^{-1} $ are measurable.
A few notes:

- I don't see a problem with showing measurability of $T^{-1}$ similarly to $T$; it should work analogously, since $T$ appears to be bijective on $X$.

- In order to show that $\mu(A)= \mu(T^{-1}(A))$ for all $A\in B$, it suffices to show equality for all cylinder sets, because they form an $\cap$-stable generator of $B$.
How to transform a list of coordinates, making the first coordinate the origin?
Sure. For each item $(x, y, z)$ in your list (e.g., $(4, 5, 2)$, so $x = 4, y = 5, z = 2$), take $(x, y, z)$ to $(x-5, y-6, z-1)$. (In the example, $(4, 5, 2)$ gets converted to $(-1, -1, 1)$.) If you want a matrix transform, using homogeneous coordinates, so that $(4, 5, 2)$ is represented by $$ \pmatrix{4\\5\\2\\1} $$ then the matrix form is $$ M = \pmatrix{1 & 0 & 0 & -5 \\ 0 & 1 & 0 & -6 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1}. $$
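In code, the whole-list version is a one-liner (a sketch; the extra point is made up, apart from the example $(4, 5, 2)$ and the origin $(5, 6, 1)$):

```python
points = [(5, 6, 1), (4, 5, 2), (7, 2, 9)]   # first point becomes the origin
ox, oy, oz = points[0]

shifted = [(x - ox, y - oy, z - oz) for (x, y, z) in points]
print(shifted)   # [(0, 0, 0), (-1, -1, 1), (2, -4, 8)]
```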
Question concerning the group over GL$(n,\mathbb{Z})$
By replicating the Euclidean algorithm, we can find a matrix $P\in GL(n,\mathbb Z)$, which is the product of a permutation matrix and of transvection (shear) matrices corresponding to elementary row operations, such that $$ P\begin{bmatrix}a_1\\a_2\\\vdots \\a_n\end{bmatrix} = \begin{bmatrix}1\\0\\\vdots \\0\end{bmatrix} $$ Now, convince yourself that the matrix $P^{-1} \in GL(n,\mathbb Z)$ answers the problem. Example: the operations $$ \begin{bmatrix}15\\6\\10\end{bmatrix} \xrightarrow{\substack{L_1 := L_1 - 2L_2\\L_3 := L_3 - L_2}} \begin{bmatrix}3\\6\\4\end{bmatrix} \xrightarrow{\substack{L_2 := L_2 - 2L_1\\L_3 := L_3 - L_1}} \begin{bmatrix}3\\0\\1\end{bmatrix} \xrightarrow{\substack{L_1 := L_1 - 3L_3}} \begin{bmatrix}0\\0\\1\end{bmatrix} $$ lead to $$ P = \begin{bmatrix}0 & 0& 1\\0 & 1 & 0\\1 & 0 & 0\end{bmatrix} \begin{bmatrix}1 & 0& -3\\0 & 1 & 0\\0 & 0 & 1\end{bmatrix} \begin{bmatrix}1 & 0& 0\\-2 & 1 & 0\\-1 & 0 & 1\end{bmatrix} \begin{bmatrix}1 & -2 & 0\\0 & 1 & 0\\0 & -1 & 1\end{bmatrix} $$ and finally $$ P^{-1} = \begin{bmatrix}15 & 2 & 5\\6 & 1 & 2\\10 & 1 & 3\end{bmatrix} $$
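Here is a pure-Python sketch of the same reduction that tracks $P^{-1}$ directly (the helper is mine; it assumes $\gcd(a_1,\dots,a_n)=1$):

```python
def unimodular_with_first_column(a):
    # Run Euclid's algorithm on the entries via row operations (as above),
    # tracking the inverse operations in Q, so that Q lies in GL(n, Z)
    # and its first column is exactly a. Assumes gcd of the entries is 1.
    n, a = len(a), list(a)
    orig = list(a)
    Q = [[int(i == j) for j in range(n)] for i in range(n)]  # identity

    while sum(x != 0 for x in a) > 1:
        j = min((i for i in range(n) if a[i]), key=lambda i: abs(a[i]))
        for i in range(n):
            if i != j and a[i]:
                q = a[i] // a[j]           # row operation L_i := L_i - q*L_j
                a[i] -= q * a[j]
                for r in range(n):         # inverse: column_j += q * column_i
                    Q[r][j] += q * Q[r][i]

    j = next(i for i in range(n) if a[i])
    if a[j] < 0:                           # fix the sign if necessary
        a[j] = -a[j]
        for r in range(n):
            Q[r][j] = -Q[r][j]
    a[0], a[j] = a[j], a[0]                # move the 1 to the front ...
    for r in range(n):                     # ... by swapping columns 0 and j
        Q[r][0], Q[r][j] = Q[r][j], Q[r][0]
    assert a[0] == 1 and [row[0] for row in Q] == orig
    return Q

print(unimodular_with_first_column([15, 6, 10]))
# [[15, 2, 5], [6, 1, 2], [10, 1, 3]] -- the same P^{-1} as above
```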
Deriving Euler's theorem from Fermat's little theorem
The statement that $a^p\equiv a\mod p$ is the same as $a^{p-1}\equiv 1\mod p$ when $a$ and $p$ are relatively prime, because in this case we can divide both sides of the congruence by $a$, and obtain one from the other. Euler's theorem says that $$a^{\phi(m)}\equiv 1\mod m$$ whenever $\gcd(a,m)=1$, where $\phi(m)$ is the number of positive integers less than $m$ and relatively prime to $m$. For a prime $p$, this is exactly $p-1$, so Fermat's Little Theorem is a special case of Euler's theorem.
Show that a lower semi continuous function composed with a continuous function is lower semi continuous
Let's also state a localised version of lower semicontinuity in terms of neighbourhoods: Definition: Let $X$ be a topological space, and $f \colon X \to \mathbb{R}$ a function. Then $f$ is lower semicontinuous at $x$ if for every $a < f(x)$ the set $f^{-1}(]a,+\infty[)$ is a neighbourhood of $x$. Then we can show that a function $f \colon X \to \mathbb{R}$ satisfies "$f^{-1}(]a,+\infty[)$ is open for every $a \in \mathbb{R}$" if and only if it is lower semicontinuous at $x$ (in the sense of the definition above) for every $x\in X$. For, if $f^{-1}(]a,+\infty[)$ is open, and $a < f(x)$, then $f^{-1}(]a,+\infty[)$ is a neighbourhood of $x$. And conversely, if $f$ is LSC at every $x$, then $f^{-1}(]a,+\infty[)$ is a neighbourhood of every $x\in X$ with $f(x) > a$; but the set of such $x$ is exactly $f^{-1}(]a,+\infty[)$, which thus is open. Then we need to see that the two pointwise definitions are equivalent. If $(x_{\alpha})$ is a net converging to $x$, and $a < f(x)$, then there is an $\alpha_0$ such that $x_{\alpha} \in f^{-1}(]a,+\infty[)$ for all $\alpha \geqslant \alpha_0$, since $f^{-1}(]a,+\infty[)$ is a neighbourhood of $x$. Hence $\liminf f(x_{\alpha}) \geqslant a$. Since that holds for every $a < f(x)$, it follows that $\liminf f(x_{\alpha}) \geqslant f(x)$. Conversely, if $f$ is not lower semicontinuous at $x$ in the sense of the definition above, there is an $a < f(x)$ such that $f^{-1}(]a,+\infty[)$ is not a neighbourhood of $x$. Hence for every neighbourhood $U$ of $x$ we can choose a point $x_U \in U \setminus f^{-1}(]a,+\infty[)$. The family of neighbourhoods of $x$ is a directed set if (partially) ordered by reversed inclusion (i.e. $U \leqslant V \iff V \subset U$), and thus we have a net $(x_U)$ converging to $x$. But by construction, $f(x_U) \leqslant a$ for all $U$, so $\liminf f(x_U) \leqslant a$, so $f$ is also not LSC at $x$ in the sense of the net-definition.
Capacity of Z Channel
There is a common abuse of notation here: $H(\cdot)$ can mean two very different things. $H(X)$ (the argument is a random variable) means the entropy of a random variable $X$ (with some given density). $H(a)$, when the argument is a scalar (more specifically, a real number in $[0,1]$), means the "binary entropy", that is, the entropy of a Bernoulli variable with parameter $a$ ($P[X=1]=a$). Hence $H(a) = -a \log(a) -(1-a) \log(1-a)$. In the deduction, $Y$ is a Bernoulli variable, hence $H(Y)=H(p)$, where the two sides of the equation use (perhaps confusingly) the two different notations: the right side is the binary entropy, and $p=P(Y=1)$. Update: To avoid confusion, let us write $H(X)$ for the entropy of a random variable and $h_b(p)$ for the binary entropy function. In the Z channel case we have a random variable $Y$ (output) which is a Bernoulli variable. Hence $H(Y) = h_b(t)$ where $t=P(Y=1)$ (I use the variable name $t$ to avoid yet another confusion, with $p$, the probability of channel crossover). Now, $$t=P(Y=1)= P(X=1,\ \text{no crossover})=(1-\alpha)(1-p)$$ Hence $$H(Y)= h_b(t)=h_b((1-\alpha)(1-p))$$
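Putting the pieces together numerically (a sketch assuming, as in the deduction above, $P(X=1)=1-\alpha$ and crossover probability $p$; the last line is the known closed form for the Z-channel capacity, included as a cross-check):

```python
import numpy as np

def h_b(t):
    # Binary entropy function in bits, with h_b(0) = h_b(1) = 0
    t = np.clip(t, 1e-12, 1 - 1e-12)
    return -t * np.log2(t) - (1 - t) * np.log2(1 - t)

def z_channel_capacity(p):
    alpha = np.linspace(0.0, 1.0, 100001)
    mutual_info = h_b((1 - alpha) * (1 - p)) - (1 - alpha) * h_b(p)
    return mutual_info.max()

p = 0.5
print(z_channel_capacity(p))                       # approx 0.3219 bits
print(np.log2(1 + (1 - p) * p ** (p / (1 - p))))   # closed form, same value
```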
What is an n-oriented graph?
A $2$-oriented graph is a $2$-labelled (each edge has one of two labels from some labelset $L_2=\{\alpha,\beta\}$), oriented graph, such that each vertex has degree four, with the indegree of $\alpha$ edges being one and the outdegree of $\alpha$ edges being one at each vertex, and similarly for $\beta$. An $n$-oriented graph is similarly defined with the labelset $L_n$ having cardinality $n$, and such that each vertex has degree $2n$ and has exactly one ingoing and one outgoing edge of each label from the labelset $L_n$. (For the purposes of this definition, a loop counts as both an ingoing and outgoing edge of the same label). It is an exercise to show that each graph $G$ with constant vertex degree $2n$ can be $n$-oriented (Hint: show that $G$ admits an Eulerian circuit)
A continuous onto function from $[0,1)$ to $(-1,1)$
I suggest to start by drawing: draw the bounding box $[0,1)\times (-1,1)$, place your pencil at $(0,0)$, and trace different functions. The following are observations and hints, not logical proofs. A first idea is that, since you want a continuous function, a monotonous one might be troublesome in mapping a semi-open interval like $[0,1)$ to an open one like $(-1,1)$. You can find one location in $[0,1)$ where $f(x)$ approaches $1$ and another location in $[0,1)$ where $f(x)$ approaches $-1$. If these locations are in some $[\epsilon,1-\epsilon]$ ($\epsilon >0$), continuity might impose that the values $-1$ or $1$ will be reached strictly inside $[0,1)$, which you do not want. One remaining choice is that the values $y=-1$ and $y=1$ are both approached near the open end of the interval, at $1$. So you might need a function that oscillates infinitely often close to $x=1$. In other words, it can be wise to open $[0,1)$ to something like $[0,+\infty)$. To summarize, three main ingredients could be useful, in the shape of a fellowship of continuous functions that you could compose. Many choices are possible; here is one (borrowing from Tolkien's Ring poem): three functions for the unit-interval $[0,1)$ up to the sky $[0,+\infty)$ (hereafter $f_0$, $f_1$, $f_\phi$), one for the oscillation lords in their infinite $[0,+\infty)$ hall of sound (hereafter $f_2$), one function to bring them all and in $]-1,1[$ bind them (hereafter $f_3$), in the Land of functions where continuity lies (indeed, continuous functions tend to cast continuous shadows to connected intervals). The first ingredient $f_1(x)$ is easy with functions you know, defined on some $[a,b[$ with a singularity at $b$. It is easy to remap $[a,b[$ to $[0,1[$, so let us stick to the interval $[0,1[$. Examples: $\frac{1}{1-x}$, $-\log (1-x)$, $\tan (\pi x/2)$, and many more. If you want more flexibility, you can start with any function $f_0$ that maps $[0,1[$ to $[0,1[$: $x^p$ with $p>0$, $\sin(\pi x/2)$, $\log(1+x(e-1))$. For the second ingredient $f_2$ on $[0,+\infty)$, the sine is a nice pick, and so are a lot of combinations of sines and cosines, like a chirping sound. But you have a lot of fancy alternatives. And you can easily plug in a function $f_\phi$ that maps $[0,+\infty)$ to $[0,+\infty)$ (for instance $\exp(x)-1$, $x^p$). The choice of $f_2$ is possibly the most sensitive, since you will need to strictly bound it afterward inside $(-1,1)$; therefore you will need a third ingredient: a function $f_3$ that compensates (as a product, for instance) the envelope of $f_2$ so that the result does not exceed $1$ in absolute value. So for the sine, $x^p$ or $\frac{\exp{x}-1}{e-1}$ will do the job. A function whose magnitude is strictly less than $1$, and tends to $1$ as $x\to 1$, is likely to work. Finally, a function can be obtained by composing $f(x)= f_3(x)\times f_2\left( f_\phi\left( f_1\left( f_0\left( x\right)\right)\right)\right)$. In your case, you have for instance $f_0(x)=x$, $f_1(x)=\frac{1}{1-x}$, $f_\phi(x)=x$, $f_2(x)=\sin(x)$, $f_3(x)=x^2$. While not fully generic, you can cook up a lot of recipes with those ingredients. For instance (figure omitted): $$f(x)=\sin(\pi x^7/2)\sin \left( \tan \left(\pi \sqrt{x}/2\right)\right)\,.$$
How to show inductively that $(2n)! > (n!)^2$
Use $$(2n)! = n! (n+1)(n+2)\cdots(2n) > n! \cdot 1\cdot 2 \cdots n = (n!)^2.$$ Alternatively, if you know binomial coefficients, $$ (2n)! = \binom{2n}{n} (n!)^2 > (n!)^2, $$ since $\binom{2n}{n}$ is an integer $> 1.$
an open ball in $\mathbb{R^n}$ is connected
An easy way to see this: a convex set is by construction path-connected, since $\forall x, y \in C$ with $C$ convex, $\lambda x + (1-\lambda)y \in C$ for all $\lambda\in[0,1]$ by convexity (so that you can choose the line segment between $x$ and $y$ as a path). Therefore, since the unit ball is convex (show it if you wish to), and since path-connected sets are also connected, you're done. I'm not quite sure your proof is correct, though. Why would $d(a,b') > r$? I don't see an explicit reason for this. Hope that helps,
Antiderivative of $e^{x^3}$
I am sorry to tell you that there is no elementary antiderivative for this expression. You can, however, integrate the Taylor series of $e^{x^3}$ term by term, $$\int e^{x^3}\,dx = \sum_{n=0}^{\infty} \frac{x^{3n+1}}{n!\,(3n+1)} + C,$$ or evaluate the integral by other numerical methods.
Fourier Transform of (Dirac delta*f(x))
An initial simplification : $$\delta(x)f(x) \ \ \text{is identical to} \ \ f(0)\delta(x).$$ (see Page 4 of https://www.reed.edu/physics/faculty/wheeler/documents/Miscellaneous%20Math/Delta%20Functions/Simplified%20Dirac%20Delta.pdf with $a=0$) Knowing that FT$(\delta(x))=1$, by linearity of the FT, $$\text{FT}(f(0)\delta(x))=f(0).$$
Distributional limit of a sequence of Dirac delta
I'm not sure if one can see directly that the sum is the right Riemann sum (I guess it depends a bit on the definitions). But we can do the following: fix $\varepsilon>0$. Since $\phi$ is continuous on $[a,b]$, it is uniformly continuous. So there exists $\delta>0$ with $|\phi(x)-\phi(y)|<\varepsilon$ whenever $|x-y|<\delta$. Then, if $n>1/\delta$, \begin{align} \left|\frac1n\,\phi\left(\frac kn\right)-\int_{k/n}^{(k+1)/n}\phi(t)\,dt\right| &=\left|\int_{k/n}^{(k+1)/n}(\phi(k/n)-\phi(t))\,dt \right|\\ \ \\ &\leq\int_{k/n}^{(k+1)/n}|\phi(k/n)-\phi(t)|\,dt\\ \ \\ &\leq\frac{\varepsilon}n. \end{align} Thus, splitting off the stray $k=5n$ term (whose contribution $\frac1n\,\phi(5)$ tends to $0$), \begin{align} \left| \frac 1n \sum_{k=-2n}^{5n} \phi\left(\frac kn\right) -\int_{-2}^5\phi(t)\,dt \right| &\leq\left|\sum_{k=-2n}^{5n-1}\int_{k/n} ^{(k+1)/n}\left[ \phi\left(\frac kn\right) -\phi(t)\right]\,dt\right| + \frac{|\phi(5)|}{n}\\ \ \\ &\leq\sum_{k=-2n}^{5n-1}\frac{\varepsilon}{n} + \frac{|\phi(5)|}{n}=7\varepsilon + \frac{|\phi(5)|}{n}. \end{align} As $\varepsilon$ was arbitrary and $\frac1n\to0$, we get that $$ \lim_{n \to \infty} \frac 1n \sum_{k=-2n}^{5n} \phi\left({\frac kn}\right) =\int_{-2}^5\phi(t)\,dt. $$
For a Borel function when does there exist a set of full measure with measurable image
My comments above assumed a general measure space with $\mu(X)$ not necessarily 1. The case $\mu(X)=1$ is a bit easier. Assume: $X$ is a measure space with sigma algebra $\Sigma$ and measure $\mu$. $\mu(X)=1$ $f:X\rightarrow X$ is a measurable and measure-preserving function. Definition: Let's say that Property P holds if there is a set $A \in \Sigma$ such that $\mu(A)=1$ and $f(A) \in \Sigma$. Claim 1: Suppose $f(X) \in \Sigma$. Then Property P holds. Proof: Define $A=X$. Then $A \in \Sigma$, $\mu(A)=1$, and $f(A) = f(X) \in \Sigma$. So Property P holds. $\Box$ Claim 2: Suppose the sigma algebra is complete, so that all subsets of measure-0 sets also have measure 0. Then Property P holds if and only if $f(X) \in \Sigma$. Proof ($\Longleftarrow$): Suppose $f(X) \in \Sigma$. Then Claim 1 ensures Property P holds. $\Box$. Proof ($\Longrightarrow$): Suppose Property P holds. Let $A$ be a set such that $A \in \Sigma$, $\mu(A)=1$, $f(A)\in \Sigma$. We know: $$ A \subseteq f^{-1}(f(A)) \quad (*) $$ and since $f(A)$ is measurable we have $f^{-1}(f(A))$ is measurable and $$ 1= \mu(A) \overset{(a)}{\leq} \mu(f^{-1}(f(A))) \overset{(b)}= \mu(f(A)) \overset{(c)}{\leq} 1$$ where (a) holds by (*); (b) holds by the measure-preserving property of $f$; (c) holds because $f(A) \subseteq X$ and $\mu(X)=1$. It follows that $\mu(f(A))=1$ and so $$ \mu(f(A)^c) = 0$$ But $f(X)^c \subseteq f(A)^c$ and so $\mu(f(X)^c)=0$ (by completeness). Thus, $f(X)^c \in \Sigma$ and so $f(X)\in \Sigma$. $\Box$ Examples 1) If we use $X=\{1, 2, 3, 4\}$, $\Sigma = \{\phi, X, \{1, 2\}, \{3, 4\} \}$, $f(1)=f(2)=2, f(3)=f(4)=4$, $\mu(\{1,2\})=\mu(\{3,4\})=1/2$, then $\mu(X)=1$, $f$ is measurable and measure-preserving, this is a complete measure space, but Property P does not hold because $f(X) = \{2, 4\} \notin \Sigma$. 2) Same $X$ and $\Sigma$ as before, but define $f:X\rightarrow X$ by $f(1)=1, f(2)=2, f(3)=4, f(4)=4$. Define $\mu(\{1,2\})=1, \mu(\{3,4\})=0$. This is a non-complete measure space with $\mu(X)=1$, $f$ is measurable and measure-preserving, and Property P holds with respect to the set $A = \{1, 2\}$. However, $f(X)$ is not measurable. Now, if we complete the measure by adding the null sets $\{3\}$ and $\{4\}$ to $\Sigma$, then indeed $f(X)$ is measurable with measure 1.
Is every differentiable function on $(0,1)$ uniformly continuous $?$
If we consider $$f(x)=\frac{1+\sin(1/x)}{2},$$ we get a differentiable function on $(0,1)$ that is not uniformly continuous: as $x\to 0^+$ it oscillates between $0$ and $1$ on arbitrarily short intervals.
How to get a position vector
Perhaps you overlooked the obvious: If $P$ has coordinates $(x,y,z)$, then the vector whose tail is at the origin and whose tip is at $(x,y,z)$ is simply $x\,i+y\,j+z\,k$ where $i,j,k$ are the standard basis vectors in $\mathbb{R}^3$. This quickly generalizes to $\mathbb{R}^n$: if $(x_1,x_2,\dots,x_n)$ is a point in $\mathbb{R}^n$, then the vector you seek is simply $x_1\,e_1+x_2\,e_2+\cdots+x_n\,e_n$ where $\{e_1,\dots,e_n\}$ are the standard basis vectors for $\mathbb{R}^n$.
Need help with Riemann sum
As you note, the points are chosen to form a geometric progression $x_i=aq^i$, with $q=\left(\dfrac ba\right)^{1/n}$. Thus we have $x_{i+1}-x_i=qx_i-x_i=(q-1)x_i$. The Riemann sum is $$\sum_{i=0}^{n-1}x_i^m(x_{i+1}-x_i)=(q-1)\sum_{i=0}^{n-1}x_i^{m+1}=(q-1)a^{m+1}\sum_{i=0}^{n-1}q^{i(m+1)}=(q-1)a^{m+1}\frac{q^{n(m+1)}-1}{q^{m+1}-1}\\ =\frac{q-1}{q^{m+1}-1}a^{m+1}\left(\left(\frac ba\right)^{m+1}-1\right).$$ The fraction can be simplified as $\dfrac1{q^m+q^{m-1}+\cdots+1},$ which will tend to $\dfrac1{m+1}$ as $q\to1$. The final value is $$\frac{b^{m+1}-a^{m+1}}{m+1}.$$
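A quick numerical check of this limit (a sketch; $a$, $b$, $m$ are arbitrary choices):

```python
import numpy as np

a, b, m, n = 1.0, 2.0, 3, 10 ** 6
q = (b / a) ** (1.0 / n)
x = a * q ** np.arange(n + 1)              # geometric partition points
riemann = np.sum(x[:-1] ** m * np.diff(x))
print(riemann)  # approx (b^4 - a^4)/4 = 3.75
```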
Image of an injective function from $Z_{pqr}$ to $Z_{pq}\times Z_{qr}\times Z_{rp}$
As the other answer indicates, your proof is not quite right: you are not trying to show that a single prime divides $x-y$, but that all three do. As for characterizing the image, suppose we have some triple $(a,b,c) \in Z_{pq}\times Z_{qr} \times Z_{pr}$. How can we check that it actually came from an element $x\in Z_{pqr}$? One approach is to compare, for example, the image of $a$ in $Z_p$ with the image of $c$ in $Z_p$. If there is such an $x$, these two things have to be equal. Do the same thing for $Z_q$ and $Z_r$, and you'll have three conditions on $(a,b,c)$. To actually construct $x$ using these conditions, you can use the Chinese Remainder Theorem. More explicitly, consider the map $\psi(a,b,c) = (a-b, b-c, c-a) \in Z_q \times Z_r \times Z_p$. Prove that everything in the image of your map $\phi$ is in the kernel of $\psi$ (manipulate the definitions), and prove that everything in the kernel of $\psi$ is in the image of $\phi$ (this requires the Chinese Remainder Theorem).
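For small primes you can verify this characterization by brute force; here is a sketch (with $p,q,r = 2,3,5$ my own choice, and $\psi$ the difference map just described):

```python
from itertools import product

p, q, r = 2, 3, 5
n = p * q * r

def phi(x):
    return (x % (p * q), x % (q * r), x % (r * p))

def in_ker_psi(a, b, c):
    # psi(a, b, c) = (a - b mod q, b - c mod r, c - a mod p)
    return (a - b) % q == 0 and (b - c) % r == 0 and (c - a) % p == 0

image = {phi(x) for x in range(n)}
kernel = {(a, b, c)
          for a, b, c in product(range(p * q), range(q * r), range(r * p))
          if in_ker_psi(a, b, c)}
print(image == kernel)  # True
```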