Dataset columns: title_body, upvoted_answer, downvoted_answer (string lengths 61–4.12k, 20–29.9k, and 19–18.2k respectively).
Prove no two cycles in a graph will share an edge if all the cycles in the graph are of odd length I am trying to use induction but can't make any headway on this problem. I was considering using the property that bipartite graphs have no cycles of odd length, but I'm not sure if that helps.
Suppose there are two (odd) cycles in your graph, say $C_1 = [u_1, u_2, \dots, u_n, u_1]$ and $C_2 = [v_1, v_2, \dots, v_m, v_1]$, that have $k$ common edges. These edges lie between $k+1$ vertices in either cycle. Now, we know that $n$ and $m$ are odd, so $n=2a-1$ and $m=2b-1$, for some $a,b \in \mathbb{N}$. Therefore the number of vertices in the big cycle (the one obtained by forgetting the common edges) is equal to $(n-(k-1))+(m-(k-1))-2 = 2a-1+2b-1-2k+2-2=2(a+b-k-1)$ (the $-2$ is because the two endpoints of the common path are counted in both cycles), which is an even number, a contradiction.
Suppose that we have two cycles $C_1$ and $C_2$ that share an edge. Without loss of generality, these cycles intersect in a path. Why? If not, then we have two points $v, w$ in both $C_1$ and $C_2$, and two paths $v, v_1, \ldots, v_n, w$ and $v, w_1, \ldots, w_m, w$ in $C_1$ and $C_2$ respectively. Consider the least $i$ such that $v_i \neq w_i$, and the least $j \ge i$ such that $v_j$ belongs on the other path (i.e. is equal to $w_k$ for some $k \ge i$). Then $$v_{i-1}, v_i, \ldots,v_j, w_{k-1},\ldots,w_{i-1}$$ is a cycle, where $v_{i-1} = w_{i - 1}$ is $v$ when $i = 1$, that intersects with $C_1$ (and $C_2$) in a path. So, we have three paths $P_1, P_2, P_3$ such that $C_1 = P_1 \cup P_2$ and $C_2 = P_2 \cup P_3$. Consider the cycle $C_3$ made up of $P_1$ and $P_3$. Then, \begin{align*} |C_1| &= |P_1| + |P_2| \\ |C_2| &= |P_2| + |P_3| \\ |C_3| &= |P_1| + |P_3|. \end{align*} Each $|C_i|$ is odd, but when we sum these expressions, $$|C_1| + |C_2| + |C_3| = 2(|P_1| + |P_2| + |P_3|),$$ which is even. This is a contradiction. Hence, the cycles cannot share an edge.
How to find Winding number Let $r:[0, 2\pi]\to\Bbb C$ be the circle given by $r(t)=1+2e^{2it}$; what is the winding number of $r(t)$ around $z=2$? I am stuck at: if $r(t)=1+2e^{it}$ then we can write it as $|z-1|=2$, but how can I write $r(t)=1+2e^{2it}$ in the circle form?
You can still write this as the circle described by $|z-1|=2$; the function just goes around the circle twice, since the exponent over $i$, namely $2t$, ranges from $0$ to $4\pi$, which is twice $2\pi$.
Just for fun, $\dfrac 1{2\pi i}\oint_r\dfrac {\operatorname dz}{z-2}=\dfrac 1{2\pi i}\int_0^{2\pi}\dfrac 1{1+2e^{2it}-2}4ie^{2it}\operatorname dt=\dfrac 1{2\pi i}\int_0^{2\pi}\dfrac 1{2e^{2it}-1}4ie^{2it}\operatorname dt=\dfrac 2{\pi}\Bigl[\dfrac 1{4i}\ln (2e^{2it}-1)\Bigr]_0^{2\pi}=\dfrac 2{\pi}\cdot\dfrac{4\pi i}{4i}=2$, where at the end the logarithm is continued continuously along the path: it increases by $2\cdot 2\pi i$, since $2e^{2it}-1$ winds around the origin twice. This checks out, since $r(t)$ goes around $2$ twice as $t$ goes from $0$ to $2\pi$. Note $r(t)$ is just the circle $2e^{2it}$ of radius $2$ centered at the origin, translated $1$ unit to the right.
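For readers who want a quick numerical sanity check of this winding number, here is a small sketch (assuming numpy is available; the grid size is an arbitrary choice):

```python
import numpy as np

# Numerical sanity check of the winding number of r(t) = 1 + 2*exp(2it) around z = 2.
t = np.linspace(0, 2 * np.pi, 200001)
r = 1 + 2 * np.exp(2j * t)
dr = np.gradient(r, t)                        # approximate r'(t) = 4i e^{2it}
winding = np.trapz(dr / (r - 2), t) / (2j * np.pi)
print(round(winding.real))                    # 2
```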
Showing $\lim_{n \to \infty} m(E_n) = 0$, assuming $f > 0$ a.e. and $\lim_{n \to \infty} \int_{E_n}f \,dm =0$ Let $f:[0,1] \to \mathbb{R}$ be Lebesgue measurable with $f > 0$ a.e. Suppose that $\{E_n\}$ is a sequence of measurable sets in $[0,1]$ with the property that $\displaystyle \lim_{n \to \infty}\int_{E_n} f \,dm = 0$. Prove that $\displaystyle \lim_{n \to \infty} m(E_n) = 0$. This question is from an old analysis qual I am studying. So far I have tried a proof by contradiction: if $\displaystyle \lim_{n \to \infty} m(E_n) \neq 0$, then there is an $\epsilon > 0$ and a subsequence $\{n_k\}$ so that $m(E_{n_k}) \ge \epsilon$ for all $k$. I am trying to somehow use this subsequence and show $\displaystyle \lim_{k \to \infty}\int_{E_{n_k}} f \,dm \neq 0$, which would give me a contradiction. Another fact I know from my measure theory course is that, for measurable $E \subseteq [0,1]$, the map $\displaystyle \nu(E) = \int_E f \,dm$ defines a measure on the Lebesgue measurable subsets of $[0,1]$. Will this fact be useful to me?
Here is a direct proof: Let $k,n \in \mathbb{N}$. Then $$\int_{E_n} f \, dm \geq \int_{E_n \cap \left[f>\frac{1}{k}\right]} f \, dm \geq \frac{1}{k} \cdot m \left( E_n \cap \left[f> \frac{1}{k} \right] \right) \geq 0$$ Since $\int_{E_n} f \, dm \to 0$ as $n \to \infty$ we obtain $$m \left( E_n \cap \left[f> \frac{1}{k} \right] \right) \to 0 \qquad (n \to \infty)$$ for all $k \in \mathbb{N}$. Thus $$m(E_n) \leq m \left( E_n \cap \left[f> \frac{1}{k} \right] \right) + m \left( \left[f > \frac{1}{k} \right]^c \right) \to m \left( \left[f > \frac{1}{k} \right]^c \right) \qquad (n \to \infty)$$ We have $$m \left( \left[f > \frac{1}{k} \right]^c \right) \to 0 \qquad (k \to \infty)$$ since $f>0$ a.s. and therefore conclude $m(E_n) \to 0$ as $n \to \infty$.
EDIT: Wrong! If $\lim_{n\to\infty} m(E_n)>0$, then(*) you can find $E_{n_k}$ such that $m(E_{n_k})>\epsilon$ and $\bigcap E_{n_k}=E$ has positive measure, say $m(E)>\epsilon/2$. Then $\int_{E_n}f(x)dm(x)\geq\int_E f(x)dm(x)>0$, so $\lim \int_{E_n}fdm\neq 0$. (*)Wlog assume that $m(E_n)>\epsilon$ and consider $F=\bigcup E_n$. We can find a finite number $k$ such that $m(F\setminus (E_1\cup\ldots\cup E_k))<\epsilon/4$. If for every $\delta>0$ there exists $h>0$ such that $m(E_j\cap E_l)<\delta$ for $j=1,\ldots, k$ and every $l>h$, then we can fix $\delta<\epsilon/4k$ and we get that $$m((E_1\cup\ldots\cup E_k)\cap E_l)\leq \epsilon/4$$ so $$m(E_l\cap(F\setminus (E_1\cup\ldots\cup E_k)))\geq 3\epsilon/4$$ but that's impossible, because $m(F\setminus (E_1\cup\ldots\cup E_k))\leq \epsilon/4$.
How to reverse digits of an integer mathematically? Is there a mathematical way to reverse the digits of an integer? For example: $18539 \rightarrow 93581$
Let $n = \lfloor \log_{10}x\rfloor$, then $x$ has $n + 1$ digits. Let $x = \displaystyle\sum_{i=0}^nx_i10^i$ where $x_i \in \{0, 1, \dots, 9\}$. To obtain $x_k$ (i.e. the $(k+1)^{\text{st}}$ digit of $x$), first divide by $10^k$ to obtain $$x10^{-k} = \sum_{i=0}^nx_i10^{i-k} = \sum_{i=0}^{k-1}x_i10^{i-k} + \sum_{i=k}^nx_i10^{i-k}.$$ Note that the first sum on the right hand side is a non-negative number less than $1$ and the second sum is an integer. Therefore $$\lfloor x10^{-k}\rfloor = \left\lfloor\sum_{i=0}^nx_i10^{i-k}\right\rfloor = \sum_{i=k}^nx_i10^{i-k}.$$ Now consider the corresponding expression for $k+1$: \begin{align*} \lfloor x10^{-(k+1)}\rfloor &= \left\lfloor\sum_{i=0}^nx_i10^{i-(k+1)}\right\rfloor\\ &= \sum_{i=k+1}^nx_i10^{i-(k+1)}\\ &= 10^{-1} \sum_{i=k+1}^nx_i10^{i-k}\\ &= 10^{-1}\left(\sum_{i=k}^nx_i10^{i-k}-x_k\right)\\ &= 10^{-1}\left(\lfloor x10^{-k}\rfloor - x_k\right). \end{align*} Rearranging for $x_k$ we find $x_k = \lfloor x10^{-k}\rfloor - 10\lfloor x10^{-(k+1)}\rfloor$. Therefore, the number you are after is \begin{align*} \sum_{k=0}^nx_{n-k}10^k &= \sum_{k=0}^n\left(\lfloor x10^{-(n-k)}\rfloor - 10\lfloor x10^{-(n-k+1)}\rfloor\right)10^k\\ &= \sum_{k=0}^n\lfloor x10^{-(n-k)}\rfloor10^k - \sum_{k=0}^n\lfloor x10^{-(n-k+1)}\rfloor10^{k+1} \end{align*} or if you prefer \begin{align*} \sum_{k=0}^nx_k10^{n-k} &= \sum_{k=0}^n\left(\lfloor x10^{-k}\rfloor - 10\lfloor x10^{-(k+1)}\rfloor\right)10^{n-k}\\ &= \sum_{k=0}^n\lfloor x10^{-k}\rfloor10^{n-k} - \sum_{k=0}^n\lfloor x10^{-(k+1)}\rfloor10^{n-k+1}. \end{align*} We can simplify this even further. Note that $\lfloor x10^{-(n+1)}\rfloor = 0$ so the final term in the second sum is zero. Now shift the index in the second sum so that it begins at $k = 1$: \begin{align*} &\sum_{k=0}^n\lfloor x10^{-k}\rfloor10^{n-k} - \sum_{k=0}^n\lfloor x10^{-(k+1)}\rfloor10^{n-k+1}\\ &= \sum_{k=0}^n\lfloor x10^{-k}\rfloor10^{n-k} - \sum_{k=1}^n\lfloor x10^{-k}\rfloor10^{n-k+2}\\ &= \lfloor x10^0\rfloor10^n + \sum_{k=1}^n\lfloor x10^{-k}\rfloor10^{n-k} - 100\sum_{k=1}^n\lfloor x10^{-k}\rfloor10^{n-k}\\ &= x10^n - 99\sum_{k=1}^n\lfloor x10^{-k}\rfloor10^{n-k}. \end{align*} Therefore, given a positive integer $x$, the integer which has the same digits in the reverse order is $$x10^{\lfloor \log_{10}x\rfloor} - 99\sum_{k=1}^{\lfloor \log_{10}x\rfloor}\lfloor x10^{-k}\rfloor10^{\lfloor \log_{10}x\rfloor-k}.$$ Here's an example calculation. Let $x = 123$, then $\lfloor \log_{10}x\rfloor = 2$. As $\lfloor x10^{-1}\rfloor = \lfloor 12.3\rfloor = 12$ and $\lfloor x10^{-2}\rfloor = \lfloor 1.23\rfloor = 1$, we have $$123\times 10^2 - 99\left(12\times 10 + 1\times 1\right) = 123\times 100 - 99\times 121 = 12300 - 11979 = 321$$ as expected.
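The closed-form expression above is easy to test numerically; here is a minimal sketch (assuming Python, with `floor(log10(x))` standing in for $\lfloor\log_{10}x\rfloor$ and string reversal used only as an independent check):

```python
from math import floor, log10

# Check of the closed form x*10^n - 99 * sum_{k=1}^{n} floor(x/10^k) * 10^{n-k}, n = floor(log10 x).
def reverse_formula(x: int) -> int:
    n = floor(log10(x))
    return x * 10**n - 99 * sum((x // 10**k) * 10**(n - k) for k in range(1, n + 1))

for x in (7, 123, 18539, 90210):
    assert reverse_formula(x) == int(str(x)[::-1])
print(reverse_formula(18539))  # 93581
```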
Let the number be 123. Perform 123 % 10 = 3 = a. Perform 123 / 10 = 12; since this is not 0, multiply a by 10, so a = 30. Perform 12 % 10 = 2 = b. Perform 12 / 10 = 1; since this is not 0, multiply b by 10 and a by 10, so a = 300 and b = 20. Perform 1 % 10 = 1 = c. Perform 1 / 10 = 0; if it were not 0 you would continue, but as it is 0 you stop the process. Just add a + b + c = 300 + 20 + 1 = 321. In terms of pure mathematics, the reversed number is the sum of $\left(\lfloor n/10^{k}\rfloor \bmod 10\right)\cdot 10^{N-1-k}$ for $k = 0$ to $N-1$, where $n$ is the integer and $N$ is its number of digits.
Test for convergence $\sum_{n=1}^{\infty} \frac{1}{2^\sqrt{n}}$ Possible Duplicate: convergence of a series involving $x^\sqrt{n}$ Test for convergence $$\sum_{n=1}^{\infty} \frac{1}{2^\sqrt{n}}$$ My first thought was to use the ratio test but it's inconclusive since it yields $1$. Are there some easy means to test the sum for convergence? Thanks!
Hint: For $n>2^{10}$, we have that $$\sqrt{n}\geq 2\log_2(n),$$ and so for $n>2^{10}$ $$2^{\sqrt{n}}\geq n^2.$$ Now try using the comparison test.
So you could use the comparison test; that one would yield results.
Counting the number of squares on an $n\times n$ Board Yesterday I was asked by a friend how many squares are in a chess board of $8\times 8$. I thought of 64 immediately, but of course this is not the solution; there are many more. So the number of squares is: $$8\times 8 + 7\times 7 + 6\times 6 + 5\times 5 + 4\times 4 + 3\times 3 + 2\times 2 + 1\times 1=1^2 + 2^2 + 3^2 + 4^2+\dots+ 8^2$$ I came across this formula: $$\frac{n(n + 1)(2n + 1)} 6$$ It produces the number of squares on an $n\times n$ board. My question is, how did he reach this formula? Did he just guess patterns until he reached a match, or is there a mathematical process behind it? If there is a mathematical process, can you please explain line by line? Thanks very much. Btw: Couldn't find matching tags for this question; it says I can't create them.
The first step is to recognize that there are $8^2$ squares of size $1$ by $1$, $7^2$ squares of size $2$ by $2$ and so on. That justifies the total number being, as you say, $1^2+2^2+3^2+\ldots+ 8^2$. Sums of powers are calculated by Faulhaber's formula. There are several ways to derive them. One way is to know or suspect that $\sum_{k=1}^n k^p$ should be of degree $p+1$. So for squares, we want $\sum_{k=1}^n k^2=an^3+bn^2+cn+d$. Then if we evaluate it at $n+1$, we get $\sum_{k=1}^{n+1} k^2=a(n+1)^3+b(n+1)^2+c(n+1)+d$. Subtracting, we get $(n+1)^2=a((n+1)^3-n^3)+b((n+1)^2-n^2)+c((n+1)^1-n^1)$ and equating the coefficients gives the desired formula. You can prove the formula rigorously by induction.
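As a quick sanity check of $\frac{n(n+1)(2n+1)}{6}$ against the direct sum (a small Python sketch):

```python
# Direct check that 1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6; for n = 8 this gives 204 squares.
for n in range(1, 50):
    assert sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
print(8 * 9 * 17 // 6)  # 204
```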
Formula to calculate the number of squares on an $m \times n$ board (with $m \ge n$): $\frac{n(n+1)(3m-n+1)}{6}$.
Infinite oscillation of random signs Suppose that $\left(a_n\right)$ is a sequence of real numbers and that $\left(\varepsilon_n\right)$ is a sequence of IID RVs with $$P\left(\varepsilon_n = \pm 1\right) = \frac{1}{2}$$ According to Williams (Probability with Martingales (1991), Section 12.3 "Random signs", pp. 113-114), the results below show that i. $\sum \varepsilon_n a_n$ converges (a.s.) if and only if $\sum a_n^2 < \infty$, and that ii. $\sum \varepsilon_n a_n$ (a.s.) oscillates infinitely if $\sum a_n^2 = \infty$ I understand why (i) follows from the results below, but I don't see how to prove (ii). Theorem 12.2 (Sums of zero-mean independent variables in $\mathcal{L}^2$, pp. 112-113). Suppose that $\left(X_k : k \in \mathbb{N}\right)$ is a sequence of independent random variables such that, for every $k$, $$E\left(X_k\right) = 0,\ \sigma_k^2 := \mathrm{Var}\left(X_k\right) < \infty$$ a. Then $$\left(\sum \sigma_k^2 < \infty\right)\ \textrm{implies that }\left(\sum X_k\ \textrm{converges a.s.}\right)$$ b. If the variables $\left(X_k\right)$ are bounded by some constant $K$ in $\left[0,\infty\right)$ in that $\left|X_k\left(\omega\right)\right| \leq K$, $\forall k, \forall \omega$, then $$\left(\sum X_k\ \textrm{converges, a.s.}\right)\ \textrm{implies that }\left(\sum \sigma_k^2 < \infty\right)$$ Notes to the theorem The Kolmogorov $0$-$1$ law implies that $$P\left(\sum X_k\ \mathrm{converges}\right) = 0\ \mathrm{or}\ 1$$ The proof of part b given in the book shows in fact that if $\left(X_k\right)$ is a sequence of independent zero-mean RVs uniformly bounded by some constant $K$, then $$\left(P\left\{\textrm{partial sums of }\sum X_k\ \textrm{are bounded}\right\} > 0\right) \implies \left(\sum X_k\ \textrm{converges a.s.}\right)$$
The events $$A := \left\{\sum \varepsilon_k a_k\ \textrm{diverges to }+\infty\right\}$$ and $$B := \left\{\sum \varepsilon_k a_k\ \textrm{diverges to }-\infty\right\}$$ are both tail events, hence by Kolmogorov's $0$-$1$ law $P\left(A\right),P\left(B\right)\in\left\{0,1\right\}$. However by symmetry $P\left(A\right)=P\left(B\right)$ and since $A \cap B = \emptyset$, we must have $P\left(A\right) = P\left(B\right) = 0$.
Since $P(\varepsilon_n=\pm1)=\frac{1}{2}$, we have $\operatorname{Var}(\varepsilon_n) = 1$, hence $$\sum \operatorname{Var}(\varepsilon_k a_k)= \sum a^2_k \operatorname{Var}(\varepsilon_k )= \sum a^2_k.$$
Infinite differentiability of Fourier solution to heat equation Given $f (x,t) = \sum_{n=1}^{\infty} a_{n}e^{-n^2t}\phi_{n}(x)$ with $\phi_{n}(x) =\sqrt{\frac{2}{\pi}}\sin(nx)$ and $a_n = \int_{0}^{\pi} f_0(x)\phi_{n}(x)dx$ is the Fourier solution to the heat equation $f_t - f_{xx} = 0$ on $(0,\pi)$ with Dirichlet condition $f(0,t) = f(\pi, t) = 0$ and initial condition $f(x,0) = f_{0}(x)$. In addition, assume $|a_{n}|\leq M$ for all $n$. Question: Prove that $f(x,t)$ is $C^{\infty}$ in $x$ for each $t > 0$. My attempt: I was able to show that at each $t = t_0$ ($t_0$ is fixed), $f(x,t_0)$ converges uniformly in $x$, thus $f(x,t)$ is $C^0$. But when I differentiate with respect to $x$, I got $f_x = \sum_{n=1}^{\infty} a_{n}e^{-n^2t_0}\sqrt{\frac{2}{\pi}}\ n\cos(nx)$. But I can't see how to show $\sum_{n=1}^{\infty} n\cos(nx) e^{-n^2t_0}$ converges, as the extra term $n$ causes me big trouble in bounding this series by a convergent series to apply the Weierstrass M-test. The more I differentiate, the higher degree of the term $n$ I would get. Can anyone please help prove the latter case (1st and 2nd derivative) to show that $f(x,t_0)$ is $C^{1}$ and $C^{2}$ in $x$?
For any fixed $t_0>0$, the exponential decay induced by $e^{-n^2 t_0}$ eats up any polynomial weight that arises from differentiation (of any order) in $x$. This can be expressed formally by an inequality such as the following: $$\lvert n^k e^{-n^2 t_0}\rvert\le C_{t_0, k}\,e^{-n^2 \frac{t_0}2}. $$ In particular, the series for $\partial^k_x f$ converges uniformly in $x$ for any $k$.
Square root of a $3\times3$ matrix Here is a $3\times3$ matrix $$\begin{pmatrix} 0& 0& 1\\ 0 & -1 & 0\\ 1& 0 & 0\end{pmatrix}$$ How can I solve this by using Cayley-Hamilton? I know how to use Cayley-Hamilton for a $2$-dimensional matrix. How can it help in finding the square root of a $3\times3$ matrix? For a $2$-dimensional matrix we can solve the equation $A^2-(\operatorname{tr}A)A+(\det A)I=0$: we have $A^2$ and $I$, we can compute $\det(A^2)$, so we have $\det A$, and we can find $A$. For a $2$-dimensional matrix, using the above equation we can compute the square root. For example, take $$A^2= \begin{pmatrix} 4& 2\\ 2& 2\end{pmatrix}.$$ Then $\det A^2= 4$ and $\det A=2$, so $$\begin{pmatrix} 4& 2\\ 2& 2\end{pmatrix} -(\operatorname{tr}A)\,A+ 2\begin{pmatrix} 1& 0\\ 0& 1\end{pmatrix} =0,$$ and by solving the above equation we can find $A$.
Since your matrix $M$ is symmetric you can diagonalize it. You'll get $M=U \cdot D \cdot U^\dagger$, where $D$ contains the eigenvalues and $U$ contains the eigenvectors. Take the square roots of the eigenvalues to get $D_{\sqrt \cdot}$ and transform the resulting diagonal matrix back: $U \cdot D_{\sqrt{\cdot}}\cdot U^\dagger=S$. It's easy to see that $S^2=M$. EDIT: And even though I don't understand your original approach, you can use it if you rewrite your matrix as $$ \pmatrix{0 & 1 & 0\\1&0&0\\0&0&-1}. $$ Now you have one $2\times 2$ block suitable for your approach.
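A minimal numerical sketch of this diagonalization recipe (assuming numpy; note the eigenvalue $-1$ of this particular matrix forces a complex square root along this route):

```python
import numpy as np

# M = U D U^H (real symmetric), S = U sqrt(D) U^H, so that S^2 = M.
M = np.array([[0, 0, 1],
              [0, -1, 0],
              [1, 0, 0]], dtype=float)

eigvals, U = np.linalg.eigh(M)                        # eigenvalues and orthonormal eigenvectors
D_sqrt = np.diag(np.sqrt(eigvals.astype(complex)))    # sqrt(-1) = 1j, hence the complex dtype
S = U @ D_sqrt @ U.conj().T

print(np.allclose(S @ S, M))  # True
```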
I solved it by Euler's method of solving for the (later named) unit-length eigenvector (a tilted 3D rotational axis), and the Euler 2D angle theta of rotation about the 3D Euler axis. I did it computationally. The square root matrix is: $$\begin{pmatrix} 0.5 & 0.7071 & 0.5\\ -0.7071 & 0 & 0.7071\\ 0.5 & -0.7071 & 0.5\end{pmatrix}$$
Linear Algebra - Homogeneous Equations Find a homogeneous system of linear equations in five unknowns whose solution space consists of all vectors in $\Re^5$ that are orthogonal to the vectors: $\mathbf v_1$$=\langle3,0,1,2,3\rangle$; $\mathbf v_2$$=\langle-3,1,0,-1/2,-1\rangle$; $\mathbf v_3$$=\langle6,0,1,2,-3\rangle$. What kind of geometric object is the solution set? Find a general solution of the system and confirm that the solution space has the orthogonality and the geometric properties that were stated. I was told this problem would be really good practice for linear algebra; however, I have no idea how to approach this problem and would appreciate any guidance.
We are looking for all $\mathbf x=\langle x,y,z,u,v\rangle \in \Re^5$ such that $\mathbf x \cdot \mathbf v_j=0$ for $j=1,2,3$, where $ \mathbf x \cdot \mathbf v_j$ denotes the usual inner product on $\Re^5$.
A vector $(x_1, x_2, x_3, x_4, x_5) \in \mathbb{R}^5$ is orthogonal to a vector $(a_1, a_2, a_3, a_4, a_5) \in \mathbb{R}^5$ if and only if $a_1 x_1 + a_2 x_2 + a_3 x_3 + a_4 x_4 + a_5 x_5 = 0$. Here, the set of vectors you are looking for is the set of solutions of the system: \begin{equation} \left\{ \begin{aligned} 3x_1& & +x_3& +2x_4&+3x_5&=&0\\ -3x_1&+x_2& &-\frac{1}{2}x_4&-x_5&=&0\\ 6x_1& &+x_3&+2x_4&-3x_5&=&0\\ \end{aligned} \right. \end{equation} Now you have a linear system and you can solve it with your favorite method.
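A quick way to confirm the geometry is to compute the null space of the coefficient matrix symbolically; a small sketch assuming sympy:

```python
from sympy import Matrix, Rational, zeros

# The solution set is the null space of the coefficient matrix: a 2-dimensional
# subspace (a plane through the origin) of R^5, orthogonal to v1, v2, v3.
A = Matrix([[ 3, 0, 1,               2,  3],
            [-3, 1, 0, Rational(-1, 2), -1],
            [ 6, 0, 1,               2, -3]])

basis = A.nullspace()
print(len(basis))                                      # 2
print(all(A * b == zeros(3, 1) for b in basis))        # True: each basis vector solves the system
```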
General expression for determinant of a block-diagonal matrix Consider having a matrix whose structure is the following: $$ A = \begin{pmatrix} a_{1,1} & a_{1,2} & a_{1,3} & 0 & 0 & 0 & 0 & 0 & 0\\ a_{2,1} & a_{2,2} & a_{2,3} & 0 & 0 & 0 & 0 & 0 & 0\\ a_{3,1} & a_{3,2} & a_{3,3} & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & a_{4,4} & a_{4,5} & a_{4,6} & 0 & 0 & 0\\ 0 & 0 & 0 & a_{5,4} & a_{5,5} & a_{5,6} & 0 & 0 & 0\\ 0 & 0 & 0 & a_{6,4} & a_{6,5} & a_{6,6} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & a_{7,7} & a_{7,8} & a_{7,9}\\ 0 & 0 & 0 & 0 & 0 & 0 & a_{8,7} & a_{8,8} & a_{8,9}\\ 0 & 0 & 0 & 0 & 0 & 0 & a_{9,7} & a_{9,8} & a_{9,9}\\ \end{pmatrix} $$ Question. What about its determinant $|A|$? Another question: I was wondering whether matrix $A$ can be expressed as a product of particular matrices to have such a structure... maybe using these matrices: $$ A_1 = \begin{pmatrix} a_{1,1} & a_{1,2} & a_{1,3}\\ a_{2,1} & a_{2,2} & a_{2,3}\\ a_{3,1} & a_{3,2} & a_{3,3}\\ \end{pmatrix} $$ $$ A_2 = \begin{pmatrix} a_{4,4} & a_{4,5} & a_{4,6}\\ a_{5,4} & a_{5,5} & a_{5,6}\\ a_{6,4} & a_{6,5} & a_{6,6}\\ \end{pmatrix} $$ $$ A_3 = \begin{pmatrix} a_{7,7} & a_{7,8} & a_{7,9}\\ a_{8,7} & a_{8,8} & a_{8,9}\\ a_{9,7} & a_{9,8} & a_{9,9}\\ \end{pmatrix} $$ I can arrange $A$ as a combination of those: $A = f(A_1,A_2,A_3)$ Kronecker product One possibility can be the Kronecker product: $$ A= \begin{pmatrix} 1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\\ \end{pmatrix} \otimes A_1 + \begin{pmatrix} 0 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0\\ \end{pmatrix} \otimes A_2 + \begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1\\ \end{pmatrix} \otimes A_3 $$ But what about the determinant??? There are sums in this case, which is not good...
First write $$\left[ \begin{array}{cccc} A_1 \hspace{-5pt} &&& \\ & A_2 \hspace{-5pt} && \\[-3pt] && \ddots \hspace{-5pt} & \\ &&& A_k \end{array} \right] = \left[ \begin{array}{cccc} A_1 \hspace{-5pt} &&& \\ & \text{I}_{n_2} \hspace{-5pt} && \\[-3pt] && \ddots \hspace{-5pt} & \\ &&& \text{I}_{n_k} \end{array} \right] \left[ \begin{array}{cccc} \text{I}_{n_1} \hspace{-5pt} &&& \\ & A_2 \hspace{-5pt} && \\[-3pt] && \ddots \hspace{-5pt} & \\ &&& \text{I}_{n_k} \end{array} \right] \dots \left[ \begin{array}{cccc} \text{I}_{n_1} \hspace{-5pt} &&& \\ & \text{I}_{n_2} \hspace{-5pt} && \\[-3pt] && \ddots \hspace{-5pt} & \\ &&& A_k \end{array} \right] $$ Also, $$\det \left( \left[ \begin{array}{ccccc} \text{I}_{n_1} \hspace{-5pt} &&&& \\[-3pt] & \ddots \hspace{-5pt} &&& \\ && A_j \hspace{-5pt} && \\[-3pt] &&& \ddots \hspace{-5pt} & \\ &&&& \text{I}_{n_k} \end{array} \right] \right) = \det (A_j)$$ which can be seen by using the cofactor formula and repeatedly expanding along a row or column with all 0's and one 1 $$ \implies \det \left( \left[ \begin{array}{cccc} A_1 \hspace{-5pt} &&& \\ & A_2 \hspace{-5pt} && \\[-3pt] && \ddots \hspace{-5pt} & \\ &&& A_k \end{array} \right] \right) = \det (A_1) \det (A_2) \cdots \det (A_k)$$
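A quick numerical spot-check of $\det(\operatorname{diag}(A_1,\dots,A_k))=\prod_i\det(A_i)$ on random blocks (a sketch assuming numpy and scipy):

```python
import numpy as np
from scipy.linalg import block_diag

# Spot-check on random 3x3 blocks that det(diag(A1, A2, A3)) = det(A1)*det(A2)*det(A3).
rng = np.random.default_rng(0)
A1, A2, A3 = (rng.standard_normal((3, 3)) for _ in range(3))
A = block_diag(A1, A2, A3)

lhs = np.linalg.det(A)
rhs = np.linalg.det(A1) * np.linalg.det(A2) * np.linalg.det(A3)
print(np.isclose(lhs, rhs))  # True
```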
A "functorial approach" using the exterior product: If $\phi: V \rightarrow V$ is an endomorphism of a vector space, you may calculate the determinant of the endomorphism $\phi$ as the induced map $$\wedge^n (\phi): \wedge^n V \rightarrow \wedge^n V$$ where $n:=\dim(V)$. Since $\wedge^n$ is a functor you get a canonical map $\wedge^l (\phi)$ for any integer $l \geq 1$. It follows $\wedge^n (\phi)$ is an endomorphism of a one dimensional vector space $\wedge^n V$ and hence it is given as multiplication with a number $a$. The number $a$ is the determinant: $a=\det(\phi)$ of the map $\phi$. If you choose a basis $B$ of $V$ and the matrix of $\phi$ in this basis is a matrix $A$, it follows $a=det(A)$ is the determinant of the matrix $A$. There is a formula: $$\wedge^{n_1+n_2}(V_1\oplus V_2) \cong \wedge^{n_1}V_1 \otimes \wedge^{n_2}V_2,$$ where $n_i:=\dim(V_i)$. Let \begin{align*} \phi= \begin{pmatrix} \phi_1 & 0 \\ 0 & \phi_2 \end{pmatrix} \end{align*} where $\phi_i$ is an endomorphism of $V_i$. It "follows" $$\det(\phi)=\wedge^{n_1+n_2}(\phi) \cong \wedge^{n_1}(\phi_1) \otimes \wedge^{n_2}(\phi_2)$$ But the tensor product $\wedge^{n_1}V_1 \otimes \wedge^{n_2}V_2$ is a one dimensional vector space and any linear endomorphism of such a space is given (in a basis) as multiplication with a number. Choosing a basis it follows the endomorphism $\wedge^{n_1}(\phi_1) \otimes \wedge^{n_2}(\phi_2)$ is multiplication with the number $$\det(A_1)\det(A_2),$$ where $A_i$ is a matrix of $\phi_i$ in a basis $B_i$ for $V_i$. Question: "But what about the determinant???" By induction it follows that if $M$ is a matrix with square matrices $A_i$ along the diagonal, you get the formula $$\det(M)=\det(A_1) \cdots \det(A_n).$$
Minimizing over partitions $f(\lambda) = \sum \limits_{i = 1}^N |\lambda_i|^4/(\sum \limits_{i = 1}^N |\lambda_i|^2)^2$ I'm trying to characterize the behavior of the the quantity: $$A = \frac{\sum \limits_{i = 1}^N x_i^4}{(\sum \limits_{i = 1}^N x_i^2)^2},$$ subject to the constraints that $$ \sum \limits_{i = 1}^N x_i = N, \ x_i \ge 0 \ \forall \ i$$ (I.e., we are working with all integer partitions of size $N$). My hypothesis is that if there exists $x_i \ge N/2$, then $A \ge \frac{x_i^4 + (N-x_i)^4}{(x_i^2 + (N-x_i)^2)^2}$ (which I've verified experimentally for N up to 40). But I'm having trouble proving this hypothesis. The closest I've come is to make use of the fact that $\sum \limits_{i = 1}^N x_i^2 \le N^2$, which gives an upper bound for the denominator, but the corresponding inequality for fourth powers in the numerator doesn't seem helpful, since we're trying to find a lower bound. I'd greatly appreciate any advice. Please ask for any clarifications if anything is unclear. Edit: I've decided this question is better phrased explicitly in terms of partitions. So let $$f(\lambda) = \frac{\sum \limits_{i = 1}^N |\lambda_i|^4}{(\sum \limits_{i = 1}^N |\lambda_i|^2)^2},$$ be a function over partitions $\lambda$ of size $N$ and $|\lambda_i|$ is the size of the $i$-th part. The hypothesis now is that in the domain of all partitions $\lambda'$ in which there exists $\lambda_i$ such that $|\lambda_i| \ge N/2$, $f(\lambda')$ is minimized by the 2-element partition $<\lambda_i, \lambda_j>$ (in which $|\lambda_j| = N - |\lambda_i|$).
You can assume that $x_N\ge N/2$ and so you want to prove that $$ \frac{\sum \limits_{i = 1}^N x_i^4}{\left(\sum \limits_{i = 1}^N x_i^2\right)^2}\ge \frac{x_N^4 + (N-x_N)^4}{(x_N^2 + (N-x_N)^2)^2} $$ Set $\lambda:=N-x_N$, and assume $\lambda>0$ (else the inequality is trivially true), and set also $$ x:= \frac{x_N}{\lambda},\quad\text{and}\quad y_i:=\frac{x_i}{\lambda},\quad\text{for $i=1,\dots, N-1$}. $$ The inequality now reads: $$ \frac{x^4+\sum \limits_{i = 1}^{N-1} y_i^4}{\left(x^2+\sum \limits_{i = 1}^{N-1} y_i^2\right)^2} \ge \frac{x^4 + 1}{(x^2 + 1)^2}, $$ and you want to prove this inequality under the conditions $x\ge 1$ (corresponding to $x_N\ge N/2$) and $S_1:=\sum_{i=1}^{N-1}y_i=1$, (corresponding to $\sum_{i=1}^N x_i=N$). Set $S_2:=\sum_{i=1}^{N-1}y_i^2$ and $S_4:=\sum_{i=1}^{N-1}y_i^4$. Then you want to prove that the inequality $$ (S_4+x^4)(x^2+1)^2\ge (x^4+1)(S_2+x^2)^2 $$ holds, or equivalently, that for the function $$ f(x):=S_4 x^4+2 S_4 x^2+S_4+2x^6-2S_2 x^6-2x^2S_2-S_2^2-S_2^2x^4 $$ we have $f(x)\ge 0$ for $x\ge 1$. Below we will prove the following inequality: $$ (MI) \qquad\qquad\qquad 1+2S_4\ge 2 S_2+S_2^2. $$ From this inequality it follows that $f(1)=4S_4+2-4S_2-2S_2^2\ge 0$. Hence it suffices to prove that $f'(x)\ge 0$ for all $x\ge 1$. But $$ f'(x)=4S_4 x^3+4 S_4 x+12 x^5-12 S_2 x^5 -4 S_2 x -4 S_2^2 x^3, $$ and so $f'(x)=2x g(x)$ with $$ g(x):=2 S_4 x^2+2S_4+6 x^4-6 S_2 x^4-2S_2-2S_2^2 x^2 $$ and we have to prove that $g(x)\ge 0$ for $x\ge 1$. From $(MI)$ we obtain $$ 2S_4 x^2-S_2^2 x^2\ge 2x^2 S_2-x^2\quad\text{and}\quad 2S_4-2S_2\ge S_2^2-1, $$ which implies $$ g(x)\ge 2 S_2 x^2-x^2+S_2^2-1+6x^4(1-S_2). $$ Now clearly $S_2\le 1$ and so $6x^4(1-S_2)\ge 6x^2(1-S_2)$. We arrive at $$ g(x)\ge S_2^2+(x^2-1)+4x^2(1-S_2)\ge 0, $$ which concludes the proof. Finally we prove $$ (MI) \qquad\qquad\qquad 1+2S_4\ge 2 S_2+S_2^2. $$ We will prove a slightly more general result: Take $x_1,\dots,x_n$ with $x_i\ge 0$ and set $$ S_1:=\sum_{i=1}^n x_i,\quad S_2:=\sum_{i=1}^n x_i^2\quad\text{and}\quad S_4:=\sum_{i=1}^n x_i^4. $$ Then we have $$ S_1^4+2S_4\ge 2S_2 S_1^2+S_2^2. $$ This inequality, in the case $S_1=1$, gives $(MI)$. Let's prove the inequality $ S_1^4+2S_4\ge 2S_2 S_1^2+S_2^2 $: Adding $S_2^2$ and subtracting $2S_4$ and $2 S_2 S_1^2$ it is equivalent to $$ S_1^4-2S_2 S_1^2+S_2^2\ge S_2^2-2S_4+S_2^2, $$ hence we have to prove $$ (S_1^2-S_2)^2\ge 2(S_2^2-S_4). $$ Now we compute $$ S_1^2-S_2=\left(\sum x_i\right)\left(\sum x_j\right)-\sum x_i^2=\sum_{i\ne j}x_i x_j $$ and $$ S_2^2-S_4=\left(\sum x_i^2\right)\left(\sum x_j^2\right)-\sum x_i^4=\sum_{i\ne j}x_i^2 x_j^2. $$ Subtracting $\sum_{i\ne j}x_i^2 x_j^2$ from both sides of the inequality $$ \left(\sum_{i\ne j}x_i x_j\right)^2\ge 2\sum_{i\ne j}x_i^2 x_j^2, $$ we see that the inequality is equivalent to $$ \sum_{\begin{array}{c} (i,j)\ne (k,l)\\ i\ne j\\ k\ne l\end{array}}x_i x_j x_k x_l\ge \sum_{i\ne j}x_i^2 x_j^2. $$ But we have $$ \sum_{\begin{array}{c} (i,j)\ne (k,l)\\ i\ne j\\ k\ne l\end{array}}x_i x_j x_k x_l= \sum_{\begin{array}{c} (i,j)\ne (k,l)\\ i\ne j\\ k\ne l \\ (i,j)=(l,k)\end{array}}x_i x_j x_k x_l+ \sum_{\begin{array}{c} (i,j)\ne (k,l)\\ i\ne j\\ k\ne l \\ (i,j)\ne (l,k)\end{array}}x_i x_j x_k x_l $$ hence $$ \sum_{\begin{array}{c} (i,j)\ne (k,l)\\ i\ne j\\ k\ne l\end{array}}x_i x_j x_k x_l\ge \sum_{\begin{array}{c} (i,j)\ne (k,l)\\ i\ne j\\ k\ne l \\ (i,j)=(l,k)\end{array}}x_i x_j x_k x_l=\sum_{i\ne j}x_i^2 x_j^2, $$ which concludes the proof of $(MI)$.
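A random spot-check of the auxiliary inequality $S_1^4+2S_4\ge 2S_2S_1^2+S_2^2$ proved above (a small Python sketch; the tolerance only absorbs floating-point round-off):

```python
import random

# Random spot-check of S1^4 + 2*S4 >= 2*S2*S1^2 + S2^2 for nonnegative x_i.
for _ in range(10_000):
    xs = [random.uniform(0, 10) for _ in range(random.randint(1, 8))]
    S1 = sum(xs)
    S2 = sum(x**2 for x in xs)
    S4 = sum(x**4 for x in xs)
    assert S1**4 + 2 * S4 >= 2 * S2 * S1**2 + S2**2 - 1e-6
print("inequality held in all random trials")
```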
The trick here is to consider the following "switching operation": $$\sigma_{ij}(\lambda_1,\ldots,\lambda_N) = (\lambda_1,\ldots,\lambda_i+1,\ldots,\lambda_j-1,\ldots,\lambda_N).$$ Now the question becomes: what can we say about $f(\sigma_{ij}(\lambda))$? Assume without loss of generality that $\lambda_j\ge1$, and that all $\lambda_k\ge0$. A short computation shows: $$f(\sigma_{ij}(\lambda)) = \frac{\sum_{k=1}^N\lambda_k^4 + 4(\lambda_i^3-\lambda_j^3) + 6(\lambda_i^2+\lambda_j^2) + 4(\lambda_i-\lambda_j) + 2}{\left(\sum_{\ell=1}^N\lambda_\ell^2+2(\lambda_i - \lambda_j) + 2\right)^2}$$ Now write $\lambda_j = \lambda_i+k$ with $k$ an integer. Then the above expression becomes $$f(\sigma_{ij}(\lambda)) = \frac{\sum_{\ell=1}^N\lambda_\ell^4 -4k^3+(6-12\lambda_i)k^2-(12\lambda_i^2-12\lambda_i+4)k+12\lambda_i^2+2}{\left(\sum_{\ell=1}^N\lambda_\ell^2+2(1-k)\right)^2}$$ Now we are left with the tedious work of checking by cases ($k>0$, $k<0$ etc.) when this decreases with respect to $f(\lambda)$. This allows you to minimize the quantity.
Determine price elasticity of demand and marginal revenue Determine price elasticity of demand and marginal revenue if $q = 30-4p-p^2$, where q is quantity demanded and p is price and p=3. I solved the first part: price elasticity of demand $= -\frac{p}{q} \frac{dq}{dp}$; on solving the above I got the answer $\frac{10}{3}$. But on solving for marginal revenue I am getting $-10$, while the correct answer given is $\frac{21}{10}$. Any hint is appreciated; please help.
Try this. Total revenue $TR$ is $pq$. Marginal revenue is the change in $TR$ with change in quantity (not price, as I incorrectly stated in my comment), so marginal revenue is $\frac{\partial TR}{\partial q}$ or $$\frac{\partial (pq)}{\partial p}\frac{\partial p}{\partial q}.$$ Revenue is $pq$, or $30p-4p^2-p^3$, so $\frac{\partial (pq)}{\partial p} = 30-8p-3p^2 = 30-24-27=-21$ at $p=3$. But then, as OP correctly calculated, $\frac{\partial q}{\partial p}=-10$, so $$\frac{\partial (pq)}{\partial p}\frac{\partial p}{\partial q}= \frac{-21}{-10} = \frac{21}{10}.$$
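A quick symbolic check of this chain-rule computation (a sketch assuming sympy):

```python
from sympy import symbols, diff

# Marginal revenue dTR/dq computed via the chain rule: (dTR/dp) / (dq/dp).
p = symbols('p', positive=True)
q = 30 - 4 * p - p**2
TR = p * q

MR = diff(TR, p) / diff(q, p)
print(MR.subs(p, 3))  # 21/10
```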
Marginal revenue $= p\left(1+\frac{1}{\text{elasticity}}\right) = 3\left(1-\frac{3}{10}\right)= \frac{21}{10}$, where the (signed) elasticity is $\frac{p}{q}\frac{dq}{dp} = -\frac{10}{3}$.
Find the $E[X^3]$ of the normal distribution Find the $E[X^3]$ of the normal distribution with mean μ and variance $σ^2$ (in terms of $μ$ and $σ$). So far, I have that it is the integral of $x^3$ multiplied by the pdf of the normal distribution, but when I try to integrate it by parts, it becomes super convoluted, especially with the e term. I know there are tricks with odd and even functions that may apply, and I have a sinking suspicion it might just end up being μ, but I've kind of hit a wall. Any help would be appreciated :) EDIT: So I think the answer is either $0$ (because of how the graph looks when I plotted it) or, by simple integration, it is $3μσ^2 + μ^3$.
When $\mu = 0$, you can observe that it is an integral of an odd function over the whole domain $\mathbb R$; therefore it is equal to zero.
Try replacing $x^2$ by $t$ and do the integration by parts.
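For reference, a symbolic check of the third moment mentioned in the question's edit (a sketch assuming sympy):

```python
from sympy import symbols, integrate, exp, sqrt, pi, oo, expand

# E[X^3] for X ~ N(mu, sigma^2), computed directly from the density.
x, mu = symbols('x mu', real=True)
sigma = symbols('sigma', positive=True)

pdf = exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * sqrt(2 * pi))
third_moment = expand(integrate(x**3 * pdf, (x, -oo, oo)))
print(third_moment)  # mu**3 + 3*mu*sigma**2 (up to how sympy arranges the terms)
```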
Show that for a continuous Gaussian martingale process $M$ that $\langle M, M \rangle_t = f(t)$ is continuous, monotone, and nondecreasing Let $M$ be a (true) martingale with continuous sample paths, such that $M_0 = 0$. We assume that $(M_t)_{t \geq 0}$ is also a Gaussian process. Show that there exists a continuous monotone nondecreasing function $f: \mathbb{R}_+ \to \mathbb{R}_+$ such that $\langle M, M \rangle_t = f(t)$ for every $t \geq 0$. Recall that $\langle M, M \rangle_t$ is the quadratic variation of $M$ and that $\langle M, M \rangle_t = \lim_{n \to \infty} \sum_{i = 1}^{p_n} (M_{t_i^n} - M_{t_{i-1}^n})^2$ for a sequence of partitions with mesh going to zero. We know this is an increasing process. In a previous part of this question I showed that a consequence of the martingale property is that the increments are independent of the past. I don't know how to show that $\langle M, M \rangle_t$ is continuous.
Define $f(t) = \mathbb{E}[M_t^2]$. It remains to check that $M_t^2 - f(t)$ is a (local) martingale since then $f$ will be the quadratic variation of a continuous martingale and hence will be continuous and non-decreasing. Recall that a continuous martingale that is also a Gaussian process has independent increments. Hence \begin{align*} \mathbb{E}[M_t^2 - M_s^2 \mid \mathcal{F}_s] = \mathbb{E}[(M_t - M_s)^2 \mid \mathcal{F}_s] = \mathbb{E}[(M_t - M_s)^2] = \mathbb{E}[M_t^2 - M_s^2] \end{align*} where we used independence of increments in the third equality. In particular, it follows that $M_t^2 - \mathbb{E}[M_t^2]$ is a martingale.
I figured it out on my own. Continuous martingales are continuous local martingales, and $\langle M, M\rangle_t$ is unique up to indistinguishability, an increasing (thus monotone and nondecreasing) process, and $N_t = M_t^2 - \langle M, M \rangle_t$ is a continuous local martingale. $M_t^2$ is also continuous, so $$f(t) = \langle M, M\rangle_t = M_t^2 - N_t$$ must also be continuous.
Evaluating $\sum_{k=1}^{n} 2^{k}k$ How can I evaulate the following series? $$\sum_{k=1}^{n} 2^{k}k$$ I don't know where to begin. $2^k$ alone would be straight forward.
Notice that $\sum_{k=1}^nk2^k=\sum_{m=1}^n\sum_{k=m}^n2^k$. Can you conclude from here?
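A quick numerical check of the hint's double-sum identity, together with the closed form $(n-1)2^{n+1}+2$ it leads to (a small Python sketch):

```python
# Check of the double-sum identity and of the closed form (n-1)*2^(n+1) + 2.
for n in range(1, 12):
    direct = sum(k * 2**k for k in range(1, n + 1))
    double_sum = sum(sum(2**k for k in range(m, n + 1)) for m in range(1, n + 1))
    closed_form = (n - 1) * 2**(n + 1) + 2
    assert direct == double_sum == closed_form
print("identity verified for n = 1..11")
```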
$$ \sum_{k=1}^{n} 2^{k} \to \infty \text{ as } n\to\infty;$$ your sum is greater, so it too goes to infinity. The comparison test shows why. https://www.khanacademy.org/math/ap-calculus-bc/bc-series-new/bc-10-6/v/comparison-test-convergence
What are the complex solutions of a linear homogenous ODE of order $n$ with constant coefficients? What are the complex solutions of a linear homogenous ODE of order $n$ with constant coefficients? Where can I read a proof? p.s. I don't even see the answer to the first question with a google search.
Also you can consult the following textbook: L. Perko, "Differential Equations and Dynamical Systems," chapter 1.
The Wikipedia page entitled Homogeneous Differential Equations gives the method right at the start; a proof (depending on the degree of rigor required) is straightforward because the method is fairly simple.
Finding a matrix of an odd determinant I am trying to find a square matrix of size $n$ consisting of zeros and ones whose determinant is an odd number. Any ideas?
All identity matrices have determinant $1$. More generally, all permutation matrices have determinant $\pm1$, and remember that adding a multiple of any row to any other row does not change the determinant. This should already give you many, many examples. Another example (not in the instances described above) of a zero-one matrix with odd determinant is a $2n×2n$ matrix with zeros on the diagonal and ones elsewhere – this has determinant $1-2n$. Finally, the most general way of constructing a zero-one matrix with odd determinant is to take any integral matrix with odd determinant and reduce modulo $2$ (odds to $1$, evens to $0$), since $\mathbb Z_2$ is a field.
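A quick numerical check of the explicit example, the $2n\times 2n$ matrix with zeros on the diagonal and ones elsewhere (a sketch assuming numpy):

```python
import numpy as np

# Check that the 2n x 2n zero-one matrix with zeros on the diagonal and ones
# elsewhere has determinant 1 - 2n, which is odd.
for n in range(1, 6):
    m = 2 * n
    A = np.ones((m, m), dtype=int) - np.eye(m, dtype=int)
    assert round(np.linalg.det(A)) == 1 - 2 * n
print("verified for n = 1..5")
```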
Also any diagonal matrix with odd elements on the diagonal, in fact: $$\det(A)=a_{1,1} \cdots a_{n,n}$$
Number of partitions of an $n$-element set into $k$ classes A partition of a set $S$ is formed by disjoint, nonempty subsets of $S$ whose union is $S$. For example, $\{\{1,3,5\},\{2\},\{4,6\}\}$ is a partition of the set $T=\{1,2,3,4,5,6\}$ consisting of subsets $\{1,3,5\},\{2\}$ and $\{4,6\}$. However, $\{\{1,2,3,5\},\{3,4,6\}\}$ is not partition of $T$. If there are $k$ nonempty subsets in a partition, then it is called a partition into $k$ classes. Let $S_k^n$ stand for the number of different partitions of a set with $n$ elements into $k$ classes. Find $S_2^n$. Show that $S_k^{n+1}=S_{k-1}^n+kS_k^n$. -- My work: From the definition of $S$, $S_2^n=2^n$. I think I am wrong somewhere, because when I put this formula into the second part to prove, I get, $$k^{n+1}=(k-1)^n+k \cdot k^n.$$ Please tell me where I am wrong. I think this problem cannot be solved by star-and-bar method as that method finds value for $k$ but does not prove it. Please help!
On (i): $2^n$ gives you the number of all subsets of $S$, but you are looking for the number of subsets that are not empty and have a non-empty complement. Their total number is $2^n-2$. Note that a nonempty subset $A$ having a non-empty complement $A^c$ corresponds with the partition $P=\{A,A^c\}$. However, $A^c$ corresponds with that partition too. So counting these sets gives twice the number of partitions. This amounts to: $$S_2^n=2^{n-1}-1$$ Addendum on (ii): Start with a set $S$ having $n$ elements. Now form $S'=S\cup\left\{ x\right\} $ where $x\notin S$. Partitions of $S'$ into $k$ classes can be made in two ways: 1) Let $\left\{ x\right\} $ be one of the classes. If $P$ is a partition of $S$ into $k-1$ classes then $P'=P\cup\left\{ \left\{ x\right\} \right\} $ is a partition of $S'$ into $k$ classes. Here there are $S_{k-1}^{n}$ possibilities. 2) Let $\left\{ x\right\} $ not be one of the classes. For every partition of $S$ into $k$ classes we can put $x$ in one of these classes, which will induce a partition of $S'$ into $k$ classes. There are $S_{k}^{n}$ such partitions and for $x$ there are $k$ candidates, so there are $kS_{k}^{n}$ possibilities.
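The recurrence from part (ii) is easy to test against the closed form for $k=2$; a small Python sketch:

```python
from functools import lru_cache

# Stirling numbers of the second kind via S(n+1, k) = S(n, k-1) + k*S(n, k),
# checked against the closed form S(n, 2) = 2^(n-1) - 1 derived above.
@lru_cache(maxsize=None)
def S(n, k):
    if k == 0:
        return 1 if n == 0 else 0
    if k > n:
        return 0
    return S(n - 1, k - 1) + k * S(n - 1, k)

for n in range(2, 12):
    assert S(n, 2) == 2**(n - 1) - 1
print([S(6, k) for k in range(1, 7)])  # [1, 31, 90, 65, 15, 1]
```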
$$S_1^{n}=1,\qquad S_k^{n+1}=S_{k-1}^n+kS_k^n \text{ for } k>1$$ For $k=2$ we get that $$S_2^{n}=S_{1}^{n-1}+2S_2^{n-1}=1+2S_2^{n-1}$$ The proposed $S_2^{n}=2^n$ does not satisfy this recurrence, so it cannot be correct.
Uniformly distributed rationals Is there any algorithm, function or formula $f(n)$, which is a bijection between the positive integers and the rationals in $(0,1)$, with the condition, that for all real numbers $a,b,x$ with $0<a<b<1<x$, if we let $i(x)$ be the number of distinct integers $0<n_j<x$ which satisfy $a<f(n_j)<b$, then we have $\lim_{x\rightarrow\infty}i(x)/x=b-a$?
You are looking for an equidistributed sequence of rationals that uses every rational exactly once. The Van der Corput sequence mentioned by GEdgar comes close, except that it misses some of the rationals. But you can just throw in the missing numbers, as long as you spread them out. For instance, the binary Van der Corput sequence uses all the dyadic rationals. If you enumerate the non-dyadic rationals and insert the $k$th one into the sequence at position $2^k$, say, you can check that the new sequence is still equidistributed. The key is that of the first $N$ values in the sequence, only $o(N)$ have been changed.
(I may have misinterpreted this question. I came across this question while wondering if it is possible to construct a "uniform" random variable that generates rationals. I think it's impossible.) Consider a random variable $X$ over the rationals in (0,1). One way to define a "uniform" distribution is the following rule: $$ \mathrm{P}(a < X < a+\epsilon) = \epsilon $$ for any $0 < a < a+\epsilon <1$. This says that the probability that a number drawn from $X$ is in the interval $(a,a+\epsilon)$ equals $\epsilon$ and, in particular, the probability is independent of $a$. The probability that $X$ equals any particular rational, $r$, must be zero. A proof by contradiction: If there existed an $r$ such that $$ \mathrm{P}(X = r) = p_r > 0 $$ then one could construct an interval around $r$ which is narrower than $p_r$, such as $(r-\frac13 p_r, r+\frac13 p_r)$. By our earlier rule, it should be the case that $$ \mathrm{P}(r-\frac13 p_r < X < r+\frac13 p_r) = \frac23 p_r $$ but this contradicts the assumption that there already exists a single element in that interval which has probability $p_r$. There is not enough probability mass in the arbitrarily-small interval to allow the individual rationals to have non-zero probabilities. Therefore, the probability mass associated with any rational must be zero. (Or perhaps hyperreal.) Now, it is trivial to construct a (possibly non-uniform) distribution over the rationals, which I'll call $Y$. Draw the denominator, $d$, from a suitable integer distribution (such as Geometric) and draw the numerator, $n$, uniformly from the range $0<n<d$. For each rational $r$, there will be a non-zero probability of it being generated from that simple scheme: $$ \mathrm{P}(Y=r) > 0 $$ and of course, these must sum to 1 to be a proper probability distribution: $$ \sum_{r \in \mathcal{Q}} \mathrm{P}(Y=r) = 1 $$ Finally (and I'm not sure I've got this right...), the probabilities for every element in X (zero) are smaller than for every element in Y (non-zero). $$ \forall r \qquad \mathrm{P}(Y=r) > \mathrm{P}(X=r) $$ and therefore the sum for X, $ \sum_{r \in \mathcal{Q}} \mathrm{P}(X=r) $, can't possibly sum to 1. Every term in the sum for X is smaller than the corresponding term for Y. So such a distribution over the rationals cannot exist. (This answer probably isn't novel, I'm not a specialist. But I am interested in this question. Any feedback appreciated.)
"Negative" versus "Minus" As a math educator, do you think it is appropriate to insist that students say "negative $0.8$" and not "minus $0.8$" to denote $-0.8$? The so called "textbook answer" regarding this question reads: A number and its opposite are called additive inverses of each other because their sum is zero, the identity element for addition. Thus, the numeral $-5$ can be read "negative five," "the opposite of five," or "the additive inverse of five." This question involves two separate, but related issues; the first is discussed at an elementary level here. While the second, and more advanced, issue is discussed here. I also found this concerning use in elementary education. I recently found an excellent historical/cultural perspective on What's so baffling about negative numbers? written by a Fields medalist.
I am fully comfortable with "minus $x$," and indeed like it better than "negative $x$," and have seldom used the latter in lectures. There is no problem with the binary operator and the unary operator having the same name. Speaking and writing mathematics would be more awkward if we did not allow useful abus de langage.
In logic, the negation of a certain element in a set is all the other terms in the set. For the set $\{1,2,3,4\}$, the negation of the element $2$ is $\neg 2=\{1,3,4\}$.
Why parameterize systems of equations that contain "free variables"? As an example, let's say you have the system of equations given by $$x+y+z = 1$$ $$2x+2y+2z = 2$$ $$3x+3y+3z=3$$ As the first few steps, we recognize that solutions to the first equation are solutions to all of them, so we parameterize the first, for instance by writing $$x=r$$ $$y=s$$ $$z = 1-r-s$$ Point being to show that we can insert arbitrary values into the parameters $r, s$, and produce a $z$ which gives us a guaranteed solution to the system above. Question Why go through the step of parameterizing the "free variables"? It always seemed like a redundant step to me. I've perused at least 7 Linear Algebra books hoping that one would give a reason, but they all just tell us to do it.
Parametrizing a space with coordinates from a know space gives you a more concrete description of the solution space. It can tell you the dimension of the solution space, it can give you explicit descriptions of what the solutions look like, and it can facilitate checking whether a given point is a solution or not. In this particular case there is indeed no need to introduce new variables $r$ and $s$. You can simply parametrize the space of all solutions $V$ as $(x,y,1-x-y)$ with $x,y\in\Bbb{R}$. But introducing these new variables comes from the view that the parametrization is in fact a bijection $$f:\ \Bbb{R}^2\ \longrightarrow\ V:\ (r,s)\ \longrightarrow\ (r,s,1-r-s).$$ From this point of view the parameters $r$ and $s$ are coordinates in a new space, the parametrizing space $\Bbb{R}^2$, and hence they get new names. Now it turns out that for systems of linear equations over a field you can always choose a parametrization in terms of the coordinate functions (here in terms of $x$ and $y$). In this case these new parameter names seem redundant (and perhaps they are?). But outside the scope of linear algebra these are not at all redundant. For example, it is impossible to parametrize the rational points on the circle in terms of any of its coordinates; here it is necessary to introduce a new parameter.
The set $\{(x,y,z)\in \Bbb R^3 \; : x+y+z=1\}$ is a plane whose Cartesian equation is $x+y+z=1$ and whose parametric equations are $x=x, y=y, z=1-x-y$ or $x=a, y=b, z=1-a-b$. The parametric equations give two direction vectors of the plane. In your case, the vectors are $$u=(1,0,-1) \text{ and } v=(0,1,-1)$$
Prove that $ AA^T=0\implies A = 0$ Let $A$ be an $n \times n$ matrix with real entries, where $n\geq2$. Let $AA^T = [b_{ij}] $, where $A^T $ is the transpose of $A$. If $b_{11} + b_{22 }+\cdots+ b_{nn} = 0$, show that $A = 0$. From what I've gleaned so far, $AA^T$ is a symmetric matrix, and the diagonals are zero. I can't figure out how to solve this question. Is there some property that exists that I'm missing for handling this question?
Let $$A=(a_{ij})\implies A^t=(a_{ji})\implies AA^t=(b_{ij})=\left(\sum_{k=1}^na_{ik}a_{jk}\right)$$ so that $$0=\sum_{i=1}^nb_{ii}=\sum_{i=1}^n\sum_{k=1}^na_{ik}a_{ik}$$ Complete the proof now.
I thought about this proof, but I think it is not totally correct: Suppose $A≠0$. Then, for any $X$, $A+X≠X$. Multiplying by $A^T$ at the right, we get: $(A+X)A^T≠XA^T$ $AA^T+XA^T≠XA^T$ $XA^T≠XA^T$ Arriving at a contradiction, we get $A=0$. I realise it may not be totally correct to multiply by $A^T$ without making some proper assumptions, but I know I can pick any $X$. Is there a way to formalise this proof?
If $\sum_{n=2}^{\infty} a_n$ is convergent, is $\sum_{n=2}^{\infty}{\sqrt{a_n} \over \ln{n}}\left( n^{a_n}-1 \right)$ convergent as well? Suppose $\sum_{n=2}^{\infty} a_n$ is a convergent series. Is the following series convergent as well? $$\sum_{n=2}^{\infty}{\sqrt{a_n} \over \ln{n}}\left( n^{a_n}-1 \right)$$ The part with $n^{a_n}-1 $ looks quite similar to $\sqrt[n]{n}-1$ which behaves like $\ln{n}$, however, I'm not able to tie this fact to the behavior of the whole expression. Any suggestions?
We can rewrite $n^{a_n}$ as $\exp (a_n \ln n)$, and then $$\sum_{n = 2}^{\infty} \frac{\sqrt{a_n}}{\ln n}\bigl(n^{a_n} - 1\bigr) = \sum_{n = 2}^{\infty} a_n^{3/2}\cdot \frac{\exp (a_n\ln n) - 1}{a_n\ln n}.\tag{1}$$ Now, assuming $a_n \geqslant 0$, everything is fine and dandy if $a_n\ln n$ remains bounded. Of course, if a series of nonnegative terms converges, the terms must on average decay much faster than $\frac{1}{\ln n}$, so it may look as though we can deduce convergence. But that's only on average, and for sparse enough indices, we can have arbitrarily large $a_n\ln n$. So large that the series in $(1)$ diverges. For $k \in \mathbb{N}$, let $n_k = \lceil \exp (4^k)\rceil$, and $$a_n = \begin{cases} k^{-2} &, n = n_k \\ 2^{-n} &, n \notin \{ n_k : k \in \mathbb{N}\}.\end{cases}$$ Then it's easily seen that $\sum a_n$ converges, yet we have $$(a_{n_k})^{3/2}\cdot \frac{\exp (a_{n_k}\ln n_k) - 1}{a_{n_k}\ln n_k} > k^{-3}\cdot \frac{a_{n_k} \ln n_k}{2} = \frac{1}{2} k^{-5} \ln n_k > \frac{4^k}{2k^5}$$ since $\frac{e^x-1}{x} > \frac{x}{2}$ for $x > 0$, and we see that the terms in $(1)$ are unbounded, whence $(1)$ diverges for this choice of $(a_n)$.
For large values of $n$ (assuming $a_n\ln n\to 0$), $$n^{a_n}-1=e^{a_n\ln n}-1\approx a_n{\ln\, n},$$ so the given series behaves like $$\sum_{n=2}^{\infty}\frac{\sqrt{a_{n}}}{\ln\, n}(n^{a_{n}}-1)\approx\sum_{n=2}^{\infty}\frac{\sqrt{a_{n}}}{\ln\, n}\,a_n{\ln\, n} = \sum_{n=2}^{\infty}{a_n}^\frac{3}{2},$$ which converges, provided that $$\lim\limits_{n\to\infty} a_{n}{\ln\, n}=0.$$
Need explanation for a combinatorics riddle (full answer provided) 11 people in a certain company have access to a safe. The company owner wants any group of six people out of the 11 to be able to open the safe, but no five-person group to be able to open it by itself. To achieve this goal he decided to put more than one lock on the safe, and give each person keys only to some of the locks. How many locks does he have to put on the safe, and how many keys will each person have, to achieve his goal (the company owner wants to reduce the number of locks as much as possible, and to reduce the number of keys each person receives as much as possible)? Answer: Each subgroup of 5 people will not be able to open the safe, so each subgroup should have a lock so that the members of the group do not have a key for it. On the other hand, a key for the same lock is shared for all but 5 members of the subgroup. We achieved two goals in this: each sub-group of 5 people could not open the safe and any subset of 6 you can. So we need $\binom{11}{5}$ locks and $\binom{10}{5}$ keys. My question: Can I get more elaboration on the answer?
Let’s rephrase this as a question about sets. Let $K$ be the set of all keys (that unlock one of the locks on the safe) and $K_i$ be the set of keys held by person $i$, $1\le i\le 11$. Then we have that $K_i\subset K$. The other conditions of the problem require that $$K_u\cup K_v\cup K_w\cup K_x\cup K_y\cup K_z=K$$ for all distinct $u,v,w,x,y,z$, and $$K_v\cup K_w\cup K_x\cup K_y\cup K_z \ne K$$ for all distinct $v,w,x,y,z$. Solution: The pigeonhole principle will help us out a lot here. If, for any particular $k\in K$, at least $6$ of the $11$ people hold the key (that is, $k\in K_i$ for at least $6$ distinct values of $i$), then it is guaranteed by the pigeonhole principle that any group of $6$ people will contain at least one person holding $k$. Conversely, if there exists a key $k\in K$ such that fewer than $6$ of the $11$ people hold that key, then it will be possible to select $5$ people none of whom hold the key, making them unable to open the safe and violating a necessary condition. Therefore, we may conclude that at least $6$ people hold every key. We may use similar reasoning to show that at most $6$ people hold every key $k\in K$. Finally, we reach the conclusion that there is a one-to-one mapping between keys $k\in K$ and five-person groups (i.e. for every five-person group, there is exactly one key $k$ that is not held by any person in that group, and is held by all $6$ people not in that group). This entails that there must be $\binom{11}{5}$ keys, as desired. However, I believe that $\binom{10}{5}$ is not the correct number of keys. Since each key is held by $6$ people as explained above, there should be $6\cdot \binom{11}{5}$ keys in total, counting the copies held by different people.
Each subgroup of 5 people will not be able to open the safe, so each subgroup should have a lock so that the members of the group do not have a key for it. $11 \choose 5$ is the number of different five person subgroups in a population of eleven employees. You need this many locks to cover every possible instance of five people being unable to open the door. The same lock can't be used to lock out two different five person subgroups, else it would lock out at least six people. On the other hand, a key for the same lock is shared for all but 5 members of the subgroup. Again, since we can't lock out a six person group, everyone not in a five person subgroup must have a key to the lock that locks out that group. Everyone gets a number of keys equal to the number of five person subgroups they’re not in. $10 \choose 5$ is the number of five person subgroups that don't include a particular person. From the ten people who aren't person X, choose five of them. We achieved two goals in this: each sub-group of 5 people could not open the safe and any subset of 6 you can. This solution satisfies the stated goal. The text hints that this is the minimum solution without laying out the above logic. In any case, the answer is correct and the proposed solution is wildly impractical.
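The construction is easy to verify by brute force on a smaller instance of the same problem (here, hypothetically: 5 people, any 3 can open, no 2 can); a Python sketch:

```python
from itertools import combinations

# One lock per 2-person subgroup; its key is given to the 3 people outside that subgroup.
people = range(5)
locks = list(combinations(people, 2))   # each lock is labelled by the pair it locks out
keys = {p: {lock for lock in locks if p not in lock} for p in people}

def can_open(group):
    held = set().union(*(keys[p] for p in group))
    return held == set(locks)           # the group needs a key for every lock

assert all(can_open(g) for g in combinations(people, 3))
assert not any(can_open(g) for g in combinations(people, 2))
print("construction works: C(5,2) = 10 locks, C(4,2) = 6 keys per person")
```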
How to define the exponential function without calculus? For fun, I would like to define the complex exponential function from these two properties: $\exp(0) = 1$ $\exp(z + w) = \exp(z) \exp(w)$ From here, I would like to find a way to compute values of $\exp(z)$, or at least to compute $\exp(1)$. So far, I found only two ways: Noting that $\exp'(z) = \exp(z)$ and solving the differential equation, which leads to $\int \frac{\exp'(z)}{\exp(z)} dz = \log(\exp(z)) + C = z$. Noting that $\exp'(z) = \exp(z)$, computing its Taylor series and checking that what I get is an entire function. The first approach is simply wrong because it involves logarithms, which I have not defined yet. The second approach looks much better. I haven't tried, but I guess I can find a way to manipulate the Taylor series to obtain the limit definition of $e$ and conclude that $\exp(1) = e$, which is my aim. However, I'm struggling to find another way that does not involve differentiation or limits in general. I would be happy to find a way to say $\exp(1) = e$ without calculus. I think that the irrational nature of $e$ forces me to use limits -- am I right?
You won't be able to derive $\exp(1)=e$ from your definition, since your definition works for the exponential function with any base $b$: $b^0=1$ and $b^{x+z}=b^x\cdot b^z$ are true for any $b$!
(An echo of previous answers) If $f(x+y) = f(x)f(y)$ and $f$ is differentiable, then $f(0) = 1$ (set $y=0$), so $\lim_{x \to 0} f(x) = 1$. Switching to standard calculus variables, \begin{align*} f(x+h)-f(x) &=f(x)f(h)-f(x)\\ &=f(x)(f(h)-1)\\ &=f(x)(f(h)-f(0)), \end{align*} so $$\dfrac{f(x+h)-f(x)}{h} =f(x)\dfrac{f(h)-f(0)}{h}.$$ Letting $h \to 0$, $f'(x) = f(x)f'(0)$. $f(x)=a^x$ satisfies $f(x+y)=f(x)f(y)$. $a=e$ is special because that makes $f'(0) = 1$. In general, $(a^x)' = a^x\ln(a)$.
What are all the homomorphisms between the rings $\mathbb{Z}_{18}$ and $\mathbb{Z}_{15}$? Any homomorphism $φ$ between the rings $\mathbb{Z}_{18}$ and $\mathbb{Z}_{15}$ is completely defined by $φ(1)$. So from $$0 = φ(0) = φ(18) = φ(18 \cdot 1) = 18 \cdot φ(1) = 15 \cdot φ(1) + 3 \cdot φ(1) = 3 \cdot φ(1)$$ we get that $φ(1)$ is either $5$ or $10$. But how can I prove or disprove that these two are valid homomorphisms?
If one has a homomorphism of two rings $R, S$, and $R~$ has an identity, then the identity must be mapped to an idempotent element of $S$, because the equation $x^2=x$ is preserved under homomorphisms. Now $5$ is not an idempotent element in $\Bbb Z_{15}$, so the map generated by $1 \to 5$ is not a homomorphism. However, $10$ is an idempotent element of $\Bbb Z_{15}$. In particular, the subring $T \subset \Bbb Z_{15}$ generated by $10$ has unit $10$. Since it is annihilated by $3$, and consequently by $18$, there is a unital homomorphism $\Bbb Z_{18} \to T$ (i.e., mapping $1$ to $10$). So your second map is a legitimate homomorphism of rings (composing with the injection $T \to \Bbb Z_{15}$). Basically, the point of this answer is to check that one of your maps preserves the relations of the two rings, while the other doesn't.
I will quote Wikipedia on the definition of a ring homomorphism. More precisely, if $R$ and $S$ are rings, then a ring homomorphism is a function $f : R \to S$ such that: $f(a + b) = f(a) + f(b)$ for all $a$ and $b$ in $R$; $f(ab) = f(a) f(b)$ for all $a$ and $b$ in $R$; and $f(1) = 1$. The last requirement is being relaxed here. We can then substitute in our $f$, in this case $f(a\cdot 1)=5a$ or $f(a\cdot 1)=10a$, and see if these relations hold. Given that these are small, finite and well-understood rings, the problem is easy to solve from here.
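A brute-force check of both candidate maps over all representatives (a small Python sketch; `c` plays the role of $\varphi(1)$):

```python
# Does n -> c*n (mod 15) define a ring homomorphism from Z_18 to Z_15?
def is_hom(c):
    for a in range(18):
        for b in range(18):
            if (c * ((a + b) % 18)) % 15 != (c * a + c * b) % 15:
                return False
            if (c * ((a * b) % 18)) % 15 != (c * a * c * b) % 15:
                return False
    return True

print(is_hom(10), is_hom(5))  # True False
```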
Find the maximum possible value Help me to find the maximum value of $T$ with $x, y, z \in \Bbb{R_+}$ $$T=\frac{x^3y^4z^3}{(x^4+y^4)(xy+z^2)^3}+\frac{y^3z^4x^3}{(y^4+z^4)(yz+x^2)^3}+\frac{z^3x^4y^3}{(z^4+x^4)(zx+y^2)^3}$$ Thanks :D
Hint: first let $$\frac{x^3y^4z^3}{(x^4+y^4)(xy+z^2)^3}=a\\ \frac{y^3z^4x^3}{(y^4+z^4)(yz+x^2)^3}=b\\ \frac{z^3x^4y^3}{(z^4+x^4)(zx+y^2)^3}=c$$then using HM-GM-AM-QM Inequalities you get $$\frac{\frac{x^3y^4z^3}{(x^4+y^4)(xy+z^2)^3}+\frac{y^3z^4x^3}{(y^4+z^4)(yz+x^2)^3}+\frac{z^3x^4y^3}{(z^4+x^4)(zx+y^2)^3}}{3} = \frac{a+b+c}{3} \le \sqrt{\frac{a^2+b^2+c^2}{3}}$$ Moreover the maximum of $\frac{a+b+c}{3}$ is obtained if and only if $a=b=c$ and $\frac{a+b+c}{3} = \sqrt{\frac{a^2+b^2+c^2}{3}}$. Finally you need to solve \begin{cases} \frac{x^3y^4z^3}{(x^4+y^4)(xy+z^2)^3}=\frac{y^3z^4x^3}{(y^4+z^4)(yz+x^2)^3}\\ \frac{y^3z^4x^3}{(y^4+z^4)(yz+x^2)^3}=\frac{z^3x^4y^3}{(z^4+x^4)(zx+y^2)^3}\\ \frac{z^3x^4y^3}{(z^4+x^4)(zx+y^2)^3}=\frac{x^3y^4z^3}{(x^4+y^4)(xy+z^2)^3} \end{cases} But since it's a symmetrical system, the solution is $x=y=z$, thus $$\frac{x^3y^4z^3}{(x^4+y^4)(xy+z^2)^3} = \frac{x^3x^4x^3}{(x^4+x^4)(x^2+x^2)^3}=\frac{1}{16}$$ Finally the maximum possible value of $T$ is $$T=\frac{x^3y^4z^3}{(x^4+y^4)(xy+z^2)^3}+\frac{y^3z^4x^3}{(y^4+z^4)(yz+x^2)^3}+\frac{z^3x^4y^3}{(z^4+x^4)(zx+y^2)^3}=3\frac{x^3x^4x^3}{(x^4+x^4)(x^2+x^2)^3}=3\frac{1}{16}=\frac{3}{16}$$
If $x≥0$ and $0≤x0$, then $x=0$? I tried to let ϵ=0.5x, but find ϵ might be zero in this way, which is contradict to for all ϵ>0. Is this statement false?
Prove it by contradiction. Suppose $x \ge 0$ but $ x \ne 0$. So, $x > 0$ Then for $\epsilon = x/ 2 \gt 0$ we have $ x \not \lt \epsilon$.
You can either take the limit of $\epsilon \to 0$ or suppose that $x \not= 0$ and try to find a contradiction.
References for Algebraic number theory I am doing algebraic number theory first time. I have done all ring theory and field theory. I am interested in algebra , so also pretty much excited about algebraic number theory. I have a month's break before my semester begins. I want to self study some beginner's text covering my syllabus so that in class I can understand it much better, and may be it will turn up a research interest for me. Can somebody please suggest me some wonderful text for beginners in algebraic number theory, I do not want any number theory book using analytical methods. Strictly algebraic. Here are my contents... Course Outline Characteristic and minimal polynomial of an element relative to a finite extension, Equivalent definitions of norm and trace, Algebraic numbers, algebraic integers and their properties. Integral bases, discriminant, Stickelberger’s theorem, Brille’s theorem, description of integral basis of quadratic, cyclotomic and special cubic fields. Ideals in the ring of algebraic integers and their norm, factorization of ideals into prime ideals, generalised Fermat’s theorem and Euler’s theorem. Dirichlet’s theorem on units, regulator of an algebraic number fields, explicit computation of fundamental units in real quadratic fields. Dedekind’s theorem for decomposion of rational primes in algebraic number fields and its application, splitting of rational primes in quadratic and cyclotomic fields. Any book that covers all these with best conceptual approach. Thanks! Anyways here are some suggested readings by my institute- Saban Alaca and Kenneth Williams, Introductory Algebraic Number Theory, Cambridge University Press (2003). M. Ram Murty and J. Esmonde Problems in Algebraic Numbers Theory, Springer-Verlag (2004). Erich Hecke, Lectures on the Theory of Algebraic Numbers, Springer-Verlag (1981). Paula Ribenboim, Algebraic Numbers, John Wiley & Sons (1972). Harry Pollard and Harold Diamond, The Theory of Algebraic Numbers, Dover Publications (2010). Which one I should go for among them, or is there some other than these all, please suggest if any? Thanks!
Serge Lang's. Algebraic Number Theory is a good text also.
Serge Lang's. Algebraic Number Theory is a good text also.
How can I find $\gcd(n^a-1,m^a-1)$? From Prove that $\gcd(a^n - 1, a^m - 1) = a^{\gcd(n, m)} - 1$ , we have $$\gcd(a^n-1,a^m-1)=a^{\gcd(n,m)}-1$$ for every positive integers $a,n,m$. I reversed $a$ with $n,m$, and I had this question: Find $\gcd(n^a-1,m^a-1)$ for every positive integers $a,n,m$ My attempt is to find the greatest common divisor of $n^p-1$ and $m^p-1$ for every prime $p$ such that $p|a$, but I couldn't get any further. How can I find the general formula of $\gcd(n^a-1,m^a-1)$ ? (sorry for my grammar mistake, English is my second language)
Welcome. If n and m are as following forms: $n=k b+1$ ⇒ $n≡1\mod b$⇒ $n^a≡1\mod b$ $m=t b+1$ ⇒ $m≡1 \mod b$⇒$m^a≡1\mod b$ Then the common divisor will be: $gcd(n^a-1, m^a-1)=b$ In other words if $n-1$ and $m-1$ have a common divisor like b then $n^a-1$ and $m^a-1$ have a common divisor like b. This can also seen in following relations: $n^a-1=(n-1)(1+n+n^2+ . . .+n^{a-1})$ $m^a-1=(m-1)(1+m+m^2+ . . .+m^{a-1})$ Finally the possible forms may be as follows: $gcd(n^a-1, m^a-1)= gcd(n-1, m-1)$ Example: $n=4, m=7$ ⇒ $gcd(4^a-1, 7^a-1)=3$ $gcd(n^a-1, m^a-1)= gcd(n-1, 1+m+m^2+. . .+m^{a-1})$ Example: $n=32$, $m=2$, $a=5$ ⇒ gcd(32^5-1, 2^5-1)= 31 $gcd(n^a-1, m^a-1)= gcd(m-1, 1+n+n^2+. . . + n^{a-1})$ $gcd(n^a-1, m^a-1)= gcd[(1+m+m^2+. . .+m^{a-1}),(1+n+n^2+. . . + n^{a-1})]$ Example: $n=2$, $m=4$, $a=3$ ⇒ $gcd(2^3-1, 4^3-1)=7$
Welcome. If n and m are as following forms: $n=k b+1$ ⇒ $n≡1\mod b$⇒ $n^a≡1\mod b$ $m=t b+1$ ⇒ $m≡1 \mod b$⇒$m^a≡1\mod b$ Then the common divisor will be: $gcd(n^a-1, m^a-1)=b$ In other words if $n-1$ and $m-1$ have a common divisor like b then $n^a-1$ and $m^a-1$ have a common divisor like b. This can also seen in following relations: $n^a-1=(n-1)(1+n+n^2+ . . .+n^{a-1})$ $m^a-1=(m-1)(1+m+m^2+ . . .+m^{a-1})$ Finally the possible forms may be as follows: $gcd(n^a-1, m^a-1)= gcd(n-1, m-1)$ Example: $n=4, m=7$ ⇒ $gcd(4^a-1, 7^a-1)=3$ $gcd(n^a-1, m^a-1)= gcd(n-1, 1+m+m^2+. . .+m^{a-1})$ Example: $n=32$, $m=2$, $a=5$ ⇒ gcd(32^5-1, 2^5-1)= 31 $gcd(n^a-1, m^a-1)= gcd(m-1, 1+n+n^2+. . . + n^{a-1})$ $gcd(n^a-1, m^a-1)= gcd[(1+m+m^2+. . .+m^{a-1}),(1+n+n^2+. . . + n^{a-1})]$ Example: $n=2$, $m=4$, $a=3$ ⇒ $gcd(2^3-1, 4^3-1)=7$
An example of a sequence that has no accumulation points For my homework is say to prove that you are correct and that you may quote theorems. I am unsure about what accumulation points are. thank you!!
Take $a_n = n$, and work from there.
The sequence $(-1)^n$ has not accumulation point Because the values are between -1 and 1 and it doesn't have limit.
Is there an easy way of finding the taylor series for $1/(1+x^2)$? I was trying to calculate the fourth derivative and then I just gave up.
Recall that a geometric series can be represented as a sum by $$\frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots = \sum_{n=0}^{\infty}x^n \quad \quad|x| <1$$ Then we can simply manipulate our equation into that familiar format to get $$\frac{1}{1+x^2} = \frac{1}{1-(-x^2)} = \sum_{n=0}^{\infty}(-x^2)^{n} = \sum_{n=0}^{\infty}(-1)^n x^{2n}$$ Fun Alternative: Note that $$\frac{d}{dx} \arctan x = \frac{1}{1+x^2}$$ and that the Taylor Series for $\arctan x$ is $$\arctan x = \sum_{n=0}^{\infty}(-1)^n\frac{x^{2n+1}}{2n+1}$$ $$\implies \frac{d}{dx} \arctan x = \frac{d}{dx}\sum_{n=0}^{\infty}(-1)^n\frac{x^{2n+1}}{2n+1}$$ Interchange the sum and $\frac{d}{dx}$ (differentiate term-by-term) on the RHS to get \begin{eqnarray*} \frac{1}{1+x^2} &=& \sum_{n=0}^{\infty}\frac{d}{dx}(-1)^n\frac{x^{2n+1}}{2n+1}\\ &=& (-1)^n \frac{1}{2n+1}\sum_{n=0}^{\infty} \frac{d}{dx}x^{2n+1}\\ &=& (-1)^n \frac{1}{2n+1}\sum_{n=0}^{\infty} (2n+1)x^{2n}\\ &=& \sum_{n=0}^{\infty} (-1)^n\frac{(2n+1)x^{2n}}{2n+1}\\ \end{eqnarray*} Cancel the $(2n+1)$ on the RHS to arrive at $$\frac{1}{1+x^2} = \sum_{n=0}^{\infty}(-1)^n x^{2n}$$
Yes. $1-x^2+x^4-x^6+x^8-x^{10} \cdots$
Finding the centralizers of $Q_8$ For $x\in G$, the centraliser in $G$ of $x$ is $C_G(x)=\{g\in G: gx=xg\}$. I'm trying to work out the centralisers in $Q_{8}=\langle a,b:a^4=1,b^2=a^2,ba=a^{-1}b\rangle$. So to do this I am focusing on one element $x$ at a time comparing $gx$ to $xg$ for all $g$ in the group. Elements of the group have the form $a^m$ and $a^mb$ So $C_G(1)=\{G\}$ To find $C_G(a):$ $a^ma=a^{m+1}=aa^m$ so hence $a^m$ is in the centraliser but $a^mba=a^{-1}a^mb$ so $a^mb$ are not in the centraliser, hence $C_G(a)=\{1,a^m\}$ Next to find $C_G(a^2)$: $a^ma^2=a^{m+2}=a^2a^m$ so hence $a^m$ is in the centraliser and $a^mba^2=a^{-2}a^mb=a^2a^mb$ so $a^mb$ is in the centraliser = $\{1,a^m,a^mb\}=\{G\}$ For $C_G(a^3)$ we have the same logic as $C_G(a)$ = $\{1,a^m\}$ Then finally for the elements of the form $a^kb$, nothing commutes with these? so $C_G(a^kb)=\{1\}$ So the center of the group is $\{1,a^2\}$ Is the logic of what I have done correct? Is there a better way of doing it?
First, recall that for a group $G$ and $g \in G$, the centralizer $C_{G}(g)$ is a subgroup of $G$. In the case of finite groups, this is especially helpful, as we can leverage Lagrange's Theorem. Note as well that $\langle g \rangle \leq C_{G}(g)$ (why is this true?). So a good strategy is to find individual elements that commute with $g$. Once you know $a \in C_{G}(g)$, you immediately have $\langle a \rangle \leq C_{G}(g)$ as well. In this way, you can begin narrowing down candidates based on the lattice of subgroups of $G$. Similarly, if you can find elements not in $C_{G}(g)$, you can narrow down upper bounds for $|C_{G}(g)|$. Examining the lattice of subgroups of $G$ will also be quite helpful here.
This is basically a question from text of abstract algebra by dummit and foote 4th edition. Still you have used the definition though langrange's theorem is available to you as discussed by @ ml0150
If $G$ is solvable and $H\trianglelefteq G$ then $G/H$ is solvable? I'd like to show the following result of group theory: Problem: Let $G$ be a solvable group and $H\trianglelefteq G$. Then $G/H$ is solvable. Definition: A group $G$ is said to be solvable if there is a normal series of $G$ such that the factors are abelian, that is, there exists a sequence of normal subgroups:$$G=G_0\trianglerighteq G_1\trianglerighteq \ldots \trianglerighteq G_n=\{1\},$$ such that the factors $G_{i}/G_{i+1}$ are abelians. Sketch: Let $$G=G_0\trianglerighteq G_1\trianglerighteq \ldots \trianglerighteq G_n=\{1\},$$ a normal series of $G$ such that $G_i/G_{i+1}$ are abelians. The first try would be to consider $$\frac{G}{H}=\frac{G_0}{H}\trianglerighteq \frac{G_1}{H}\trianglerighteq \ldots \trianglerighteq \frac{G_n}{H}.$$ Of course this makes no sense for we don't even have $H\subset G_i$ for some $i$. But we can get subgroups of $G$ containing $H$ considering $HG_i$. Since $H\trianglelefteq G$ we have $HG_i\leq G$ for all $i$. So it is natural to consider the sequence: $$G=HG_0\geq HG_1\geq \ldots \geq HG_n=H\geq \{1\}.$$ It is not hard to verify this is a normal series of $G$. I conjecture the normal series for $G/H$ will arise from this but I still can't see how. Can anyone help me finishing this proof?
Consider the series $$ G/H=(HG_0)/H \ge (HG_1)/H \ge \dots\ge (HG_n)/H=\{1\}. $$
Homomorphic image of a solvable group is solvable. Consider natural homomorphism from G to G/H.
Show that some of the root of the polynomial is not real. \begin{equation*} p(x)=a_nx^n+a_{n-1}x^{n-1}+\dots+a_3x^3+x^2+x+1. \end{equation*} All the coefficients are real. Show that some of the roots are not real. I don't have any idea how to do this, I only have an intuition that something is related to the coefficients of $x^2$ and $x$ and the constant 1. Kindly help me!!
Adapting the argument from this answer by Noam Elkies. If $p(x)$ has only real zeros, then so does its reciprocal $$ x^np(\frac1x)=x^n+x^{n-1}+x^{n-2}+\cdots+a_{n-1}x+a_n. $$ But by Vieta relations the sum of the roots of this polynomial is $-1$ and the sum of their pairwise products is $1$. Therefore the sum of the squares of the roots is $$(-1)^2-2\cdot1=-1<0. $$ Consequently some of the roots must be non-real.
Counterexample: If $n=0$, then $p$ has no roots.
How make summation for a series which contains arbitrary elements I am studding a research paper in winch author presented a analytical model for set traversal and different cases of time complexity. I am not understanding the one point in the model that is related to summation formulation equation n# 18. $\frac{m_1 + 1}{2}t + \frac{l_2 + 1}{2}t + .... + \frac{l_n + 1}{2}t$ $ = \frac{1}{2} \bigr(m_1 + n + \sum_{a=l_2}^{l_n} a\bigr) t$ Why $l_2 + l_3 + l_4 + ..... + l_n = \sum_{a=l_2}^{l_n} a$ why its not $\sum_{a=2}^{n} l_a$
Your doubts are reasonable and your proposal is correct. The formula (18) in the referred paper and some more show all the same kind of typos (or miscalculations). We have \begin{align*} l_2 + l_3 + l_4 + ..... + l_n\color{blue}{=\sum_{a=2}^{n} l_a} \end{align*} and not $l_2 + l_3 + l_4 + ..... + l_n = \sum_{a=l_2}^{l_n} a$, since $$\sum_{a=l_2}^{l_n} a=\sum_{l_2\leq a\leq l_n}a$$
Sloppy notation. The author intended $\sum\limits_{a \in \{l_2, l_3, .... , l_n\}} a$ which would be the same thing as $\sum_{a=2}^n l_a$. However I'd argue that $\sum\limits_{a=l_2}^{l_n} a$ is not actually wrong per se. The notation for $\sum$ is $\sum\limits_{\text{some condition}} term = $ "the sum of all the $terms$ for which that condition is true". (A practical example of this would be $\sum\limits_{a|24;a\in \mathbb N} a = $ the sum of all the divisors of $24=1 + 2 + 3 + 4+ 6 + 8 +12 + 24$.) This means the familiar $\sum\limits_{a=2}^n l_a$ actually means $\sum\limits_{a\in \{2,3,4,...n\}} l_a$ and that $_{a=2}^n$ is just shorthand for $a\in \{2.....n\}$. Except when were we ever actually TAUGHT that $_{a=2}^n$ is just shorthand for $a \in \{2.....n\}$? Well, maybe you were but I never was. I had to pick it up on the streets... Anyway, if $_{a=2}^n$ is proper shorthand for $a \in \{2,.... n\}$ then isn't it okay that $_{a=l_2}^{l_n}$ proper shorthand for $a \in \{l_2, .... l_n\}$? Meh... maybe. I admit it looks weird and we have to think but... is it any more weird than $_{a=2}^n$? (That was rhetorical.) (Anyway, welcome to learning mathematics on the street.) ====ADDENDUM==== In a comment in Marcus Scheurer's answer, I realize that $\sum\limits_{a=l_2}^{l_n} a$ could just as legitimately be interpretated as $l_2 + (l_2 + 1) + (l_2 + 3) + ..... +(l_2 + \lfloor l_n - l_2\rfloor)$. It would take a bit bit of a bean-counter mind to interpret it that way if the individual $l_i$ values weren't specifically spelled out. But that is a technical and correct interpretation. So... was the author wrong or right? Arguments can be made both ways. But it's what the author did. And the author clearly meant it to be interpreted as $\sum\limits_{i=2}^n l_i$. Those are the existential facts. What we do with them is up to us.
Do Holder continuous functions preserve rate of uniform convergence sequence? Suppose $g$ is $\alpha$-Hölder continuous and $f_n$ converges uniformly to $f$ at rate $a_n$, so that $a_n \sup_{x}|f_n(x) - f(x)| \to 0 $ as $n \to \infty$ Then does $g \circ f_n$ converges uniformly to $g\circ f$ at rate $a_n$?
The claim is false in general. To see it, take $f_n(x) = x + \frac 1n$, $f(x) = x$, and set $a_n = n^{1/2}$ (for instance), let also $g(x)= x^{1/2}$, where $x\geq 0$. We have that $g$ is Holder-$\frac 12$, but the claim in your question fails for these data. Indeed, $f_n$ converges to $f$ uniformly on $\mathbb{R}$, in particular $$ n^{1/2} \sup_{x \in [0,1]} |f_n(x) - f(x)| \to 0. $$ On the other hand, $$ n^{1/2} \sup_{x\in [0,1]} |g(f_n(x)) - g(f(x))| = n^{1/2} \sup_{x\in [0,1]} \left| \frac{ f_n(x) - f(x) }{\sqrt{f_n(x)} + \sqrt{f(x)}} \right| = \\ n^{1/2} \frac 1n \sup_{x\in [0,1]} \left| \frac{ 1 }{\sqrt{x+1/n} + \sqrt{x}} \right| \geq (\text{ take } x=0) \\ n^{-1/2} \frac{1}{\sqrt{1/n}} = 1, $$ so the convergence fails.
No. We need $a_n \sup_x|f_n(x) - f(x)|^\alpha \to 0$ From the Hölder continuity of $g$, there are non-negative constant $C$, $\alpha$ such that $ a_n|g(x) - g(y)| \le a_n C |x - y |^{\alpha} $ For $\alpha > 1$, the function $g$ is constant and the convergence is automatic. For $\alpha < 1$, we have, from the uniform convergence of $f_n$ at rate $a_n$, that for any $\epsilon > 0$, $\alpha \le 1$, $x$, there exists a $N > N_*$ such that for $n > N$ $a_nC|f_n(x) - f(x)|^\alpha < \epsilon $ Which implies that for any $\epsilon > 0$ and $x$, we have a $n$ so that $ a_n|g(f_n(x)) - g(f(x))| \le a_n C |f_n(x) - f(x) |^{\alpha} < \epsilon$ which proves the result.
Is this of any real importance to the mathematical scientific community? I'm a 31 year old engineer, and I've recently came up with a way to exactly predict the probability of the number of prime numbers between two different integers. For example using my way, the number of prime numbers between $0$ and $100$ is between $0$ and $50$. And it turns out that it is correct, since there are $25$ primes between $0$ and $100$. But is this of any real importance that would lead me to publish a paper? Also my way is purely elementary and so I suspect that mathematicians would even bother to give it a look.
The method in question is noting that no primes other than 2 and 5 are divisible by 2 or 5, and thus any ten numbers $10n,10n+1,\ldots10n+9$ (really, any ten consecutive numbers) contain at most 4 primes, assuming the first term is larger than 5. This can be improved by removing multiples of 3, so that no 30 consecutive numbers can have more than 8 primes (as long as the smallest is more than 5). It's an improvement since the basic method would give only a bound of 12 for such an interval. Of course this can be increased by throwing in new primes like 7. Indeed, with more subtlety you can prove things about intervals smaller than the least common multiple of the primes used, and this has been done for intervals of lengths up to perhaps 3000. For larger intervals analytic methods have been used which restrict the number of primes which can appear. This is not a new result, but the family of results is important! In particular, they were an ingredient in improving Zhang's theorem, a major step toward the twin prime conjecture. So what you have is just a piece of the puzzle, but it's a very large, intricate puzzle. Be glad you have seen part of it!
Yes, probabilistic primality testing and related gadgets are quite common. They assist encryption, for example, by providing candidate primes. However, it's hard to say whether your particular way of doing it is worthy of being published, especially without seeing it. This is quite an extensively studied area.
If $A^2+A=0$,then $\lambda=1$ cannot be an eigenvalue of A. Prove the following statement: If $A^2+A=0$,then $\lambda=1$ cannot be an eigenvalue of A. I've been struggling on this question for a couple of hours and don't know how to approach it.
Let $\lambda$ be an eigenvalue of $A$ with eigenvector $v$. What is $(A^2+A)v$?
I believe this may be a solution as well... $A^2+A=0$ $A(A+I_n)=0$ $\therefore$ $A=[0]$ or $A=-I_n$ Since $A=0$ and $A=-I_n$ are both diagonal matrices, the possible eigenvalues for $A$ are simply the entries of the diagonal. $\therefore$ the possible eigenvalues of $A$ are: $\lambda=0$, and $\lambda=-1$ Therefore, it is not possible for $\lambda=1$ to be an eigenvalue of $A$
Two dependent random variables with standard normal distribution and zero covariance I need to find two dependent random variables with standard normal distribution, but with zero covariance. It is easy too find just two dependent random variables with such a distribution (X and -X, for example), but how I can reach zero covariance? Thanks in advance.
Try $X \sim N(0,1)$ and $Y=X$ when $|X|\lt k$ and $Y=-X$ when $|X|\ge k$ for some non-negative $k$. Then $X$ and $Y$ have standard normal distributions and $cov(X,Y)$ is a continuous increasing function of $k$, negative when $k$ is close to $0$ and positive when $k$ is large. So for some $k$ you will have $cov(X,Y)=0$
Although I haven't worked through all aspects that must be checked, I believe that if you specify $$[X \mid Y] = Y$$ i.e. that $X$ becomes $Y$ when conditioning on $Y$, you obtain $$\text {Cov}(X,Y) = E(XY) = E[E(XY\mid Y)] = E[YE(X\mid Y)]= E[YE(Y)] =0$$ Note that $$E(X) = E[E(X\mid Y)] = E[Y] = 0$$ and by the decomposition-of-variance formula $$\text {Var}(X) = \text {Var}(E[X\mid Y]) + E[\text {Var}(X\mid Y)]= \text {Var}(E[Y])+E[\text {Var}(Y)] = 0 + 1 = 1$$ i.e. this conditional specification is consistent with the unconditional moments. So these variables are uncorrelated but not independent.
Volume of a parallelpiped from its sides and diagonals? If a parallelogram has sides of length $a$ and $b$, and diagonals of length $d$ and $e$, then we can find its area in the following way. By the polarization identity, we have $a b \cos\theta = \frac{1}{4}|d^2-e^2|$. Now the area is $a b \sin \theta$, which is equal to $\sqrt{a^2 b^2 - (a b \cos \theta)^2}$, or $$\sqrt{a^2 b^2 - \frac{1}{16}(d^2-e^2)^2}.$$ This is not an exceptionally pretty formula, but maybe it's the best one can do. How can such a formula be generalized to higher dimensions? In particular, what is the volume of a parallelpiped, given its edge and diagonal lengths?
Let the parallelepiped with a vertex $O$ at the origin have vertices $A$, $B$, $C$. Write $D = B+C$; $E= C+A$; $F=A+B$; and $G=A+B+C$. Further, assign these names to the lengths of edges and diagonals: $$a := |OA|\quad b := |OB| \quad c := |OC| \quad d := |AD| \quad e := |BE| \quad f := |CF| \quad g := |OG|$$ Note that a parallelepiped (and it properties) is completely determined by six quantities: the lengths of three edges meeting at a vertex, and the measures of the three angles between those edges. The total number of edge-lengths and diagonal-lengths is seven, so there's a dependency among these values, namely $$4 \;(\; a^2 + b^2 + c^2\;) = d^2 + e^2 + f^2 + g^2$$ Although one diagonal (say, $g$) is unnecessary, it helps to simplify the volume formula a bit: $$\begin{align} 32 V^2 &= 32 a^2 b^2 c^2 + d^2 e^2 f^2 + g^2 ( d^2 e^2 + e^2 f^2 + f^2 d^2 )\\ &- 2 g^2 ( a^2 d^2 + b^2 e^2 + c^2 f^2 ) - 2 ( a^2 e^2 f^2 + d^2 b^2 f^2 + d^2 e^2 c^2 ) \end{align}$$ In the case of a rectangular parallelepiped, for which $d^2=e^2=f^2=g^2=a^2+b^2+c^2$, the formula reduces to $V = a b c$, as expected. I derived the formulas using coordinates $$A = (a_x,0,0) \qquad B = (b_x, b_y, 0) \qquad C = (c_x, c_y, c_z)$$ Expressing the various distances in terms of these $$a^2 = A\cdot A = a_x^2 \qquad b^2 = B\cdot B = b_x^2 + b_y^2 \qquad c^2 = C\cdot C = c_x^2 + c_y^2 + c_z^2$$ $$d^2 = \overrightarrow{AD}\cdot \overrightarrow{AD}=\dots \quad e^2 = \overrightarrow{BE}\cdot \overrightarrow{BE} =\dots \quad f^2 = \overrightarrow{CF}\cdot \overrightarrow{CF}=\dots$$ along with $$V = a_x b_y c_z$$ I systematically eliminated $a_x$, $b_x$, $b_y$, $c_x$, $c_y$, $c_z$ from the system with the help of Mathematica's Resultant[] function. (Appropriate applications of the Law of Cosines should work just as well, but the Resultant process lets me crank through the equations without having to think too hard. :) An $n$-dimensional parallelotope is determined by $\frac{1}{2}n(n+1)$ values. (These are the triangular numbers. You can see their relevance by noting the coordinatization done above considers vectors using $1$, $2$, $3$, ..., $n$ non-zero coordinates.) The figure has n characteristic edge-lengths and $2^{n-1}$ diagonals, so expressing volume in terms of these is certainly possible. With an increasing number of "extra" lengths, the resulting formula can take a variety of forms.
Let the parallelpiped be defined by the vectors a, b, c. Assume that its height is h wrt the base formed by a, b with θ as the angle between a and b. We further let n be unit normal vector wrt the base plane. Notation:- * = common product (multiplication); x = cross product; . = scalar product (a x b) . c = (a * b * sin θ n) . c = (a*b*sin θ)* (n . c) = (a*b*sin θ)* h = volume of the parallelpiped
What is the operator "capital D" and how can the chain rule be used in this way I ought to know this but I was somehow always able to avoid mixing partials and $d$s My book notes the following: (with some intermediate steps which are less important than the time taken to write) Here $f$ is a function of $x(t),y(t),z(t),t$ and $u=\frac{dx}{dt}$ and "" for others. $\frac{Df}{Dt}=\frac{\partial f}{\partial t}+(\mathbf{u}\cdot\nabla)f$ I was taught to use $D$ for total derivative but there's no malformed interpretation I could apply here. What is going on and if you were to look it up in an index of a book what would the book be on, and what would you look up? (E.G: Real analysis, partial differentiation) Also I'm not sure how to use bold with LaTeX nor get a special dot. If someone could tell me in the comments I'd be very grateful.
This is just the chain rule. It may help to think of this as follows: We have $f = f(x(t),y(t),z(t),w(t))$ where it so happens that $w(t) = t$. We then have $$ \newcommand{\pwrt}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\dwrt}[2]{\frac{d #1}{d #2}} D_tf = \pwrt fw \dwrt wt + \pwrt fx \dwrt xt + \pwrt fy \dwrt yt + \pwrt fz \dwrt zt =\\ \pwrt fw + (\mathbf u \cdot \nabla)f $$ you may consider rewriting $\pwrt fw$ as $\pwrt ft$ a slight abuse of notation.
It's just the chain rule. To be more precise, write $x=X(t)$, $y=Y(t)$, $z=Z(t)$ in order to distinguish the quantities $x$, $y$, $z$ from the functions $X$, $Y$, $Z$ which say how they depend on $t$. Now take your function $f(x,y,z,t)$ and define $$ g(t) = f(X(t),Y(t),Z(t),t) . $$ Then the notation $Df/Dt$ (by definition) means exactly $g'(t)$, which can be computed using the chain rule: $$ \begin{align} g'(t) &= f'_x(X(t),Y(t),Z(t),t) \, X'(t) \\ &+ f'_y(X(t),Y(t),Z(t),t) \, Y'(t) \\ &+ f'_z(X(t),Y(t),Z(t),t) \, Z'(t) \\ &+ f'_t(X(t),Y(t),Z(t),t), \end{align} $$ which equals the given expression since $X'(t)=u$, etc.
A problem on interchange of limit and integration Suppose $\lim_{h\to 0}f_n(h) = 0$ such that $g(h)=\sum_{n=1}^{\infty}f_n(h)$ converges for any $h$. Can we tell that $\lim_{h\to0}g(h) = 0$ ? i.e can we change the order of limit and sum here ? If not what is needed to make it happen ? I don't know how to use DCT/BCT here. Is it true if $f_n(h) = \int_{E_n} |f(x+h) -f(x)| dx$ where $E_n$ is an interval of length $[nh,(n+1)h]$ where $f$ is bounded and integrable.
No: Consider $f_n(h)=h^{n-1}$ for $h\in(-1,1),n\in\mathbb N$. Then $f_n(h)\to0$ for $h\to0$ and any $n$, but $$ g(h)=\sum_{n\in\mathbb N}f_n(h)=\sum_{n\in\mathbb N}h^{n-1}=\frac{1}{1-h}, $$ which does not tend to 0 as $h\to0$. To interchange limits you need that the series converges uniformly EDIT: This example doesnt work (see comment). But how about this: Since $\mathbb Q\backslash\{0\}$ is countable, there is a bijective mapping $a:\mathbb N\to\mathbb Q\backslash\{0\}$. Now define $f_n(h)=0$ if $h\neq a(n)$ and $f_n(h)=1$ if $h=a(n)$. Then each $f_n$ is continuous at $0$, since $f_n$ is 0 everywhere except at $a(n)\neq0$. Since $a$ is bijective we have that $g(h)=1$ if $h\in\mathbb Q\backslash\{0\}$ and $g(h)=0$ otherwise. In particular, $g(1/n)=1$ for any $n\in\mathbb N$.
$$\lim_{h\to0}g(h) = \lim_{h \to 0} \sum_{n=1}^{\infty} f_n(h)=\sum_{n=1}^{\infty} \left({\lim_{h \to 0} f_n(h)}\right) = 0 $$ as $$\sum_{n=1}^{\infty} \left({\lim_{h \to 0} f_n(h)}\right) = \sum_{n=1}^{\infty} 0 = 0 $$
Solve differential equations containing $x''$ Solve the equation $$x'' + 3x' = 20e^{2t}$$ if $x(0) = 0$ and $x'(0) = 1$. I am pretty sure $x$ must contain $e^{2t}$, but other than that, I'm not really sure how to proceed from here. Any help would really be appreciated!
Hints: Put $y=x'$. The homogeneous equation becomes $y'+3y=0$ whose solution is $y=ae^{-3t}$. Integrate to get $x$. A particular solution of the type $ce^{2t}$ exists. Add the particular solution to the general solution of the homogeneous equation and the apply the intial conditions. Answer: $e^{-3t}-3+2e^{2t}$.
For your work: The solution is given by $$x(t)=-\frac{1}{3} c_1 e^{-3 t}+c_2+2 e^{2 t}$$
Is there an identity to combine a sum of more than two sines; eg $\sin(a)+\sin(b)+\sin(c)+\sin(d)$? I get these trigonometric product to sum formulas like: $$\sin(a)+\sin(b)=2\sin\frac12(a+b)\cos\frac12(a-b)$$ And that's useful, but I'm not too sure what to do if I need to turn a product into a sum if there's more than two variables. What would I do with something like this? $$\sin(a)+\sin(b)+\sin(c)+\sin(d)$$
The following formula works: $$ \begin{align} \sin(a)+sin(b)+sin(c)+sin(d) = 4*\sin\left(\frac{a+b+c+d}{4}\right)*\cos\left(\frac{a-b+c-d}{4}\right) \\ *\cos\left(\frac{a+b-c-d}{4}\right)*\cos\left(\frac{a-b-c+d}{4}\right)- \\ 4*\cos\left(\frac{a+b+c+d}{4}\right)*\sin\left(\frac{a-b+c-d}{4}\right) \\ *\sin\left(\frac{a+b-c-d}{4}\right)*\sin\left(\frac{a-b-c+d}{4}\right) \\ \end{align} $$ I wrote this Python script in Google colab to "prove" it: import numpy as np def C(x): return np.cos(180/np.pi*x) def S(x): return np.sin(180/np.pi*x) a = 37 b = 6 c = 88 d = 7 x = S(a) + S(b) + S(c) + S(d) y1 = 4*S((a+b+c+d)/4.0)*C((a-b+c-d)/4.0)*C((a+b-c-d)/4.0)*C((a-b-c+d)/4.0) y2 = 4*C((a+b+c+d)/4.0)*S((a-b+c-d)/4.0)*S((a+b-c-d)/4.0)*S((a-b-c+d)/4.0) print(x) print(y1-y2) When the input is $(a,b,c,d) = (37,6,88,7)$, with angles measured in degrees, the outputs for $x$ and $(y1-y2)$ agree to $11$ decimal places at $-1.02707706592$. I also tried $(a,b,c,d) = (10,27,18,68)$, and the two sides of the equation again agreed to $11$ decimal places. That isn't a mathematical proof, but the probability of my proposed identity holding true by accident for randomly selected angles is essentially nil. I tried making one of the angles obtuse, and it still worked. Steps in the derivation: Use angle sum formulas to derive an expression for $\sin\left(\frac{A+B+C+D}{4}\right)$ in terms of $\sin\left(\frac{A}{4}\right)$, $\sin\left(\frac{B}{4}\right)$, $\sin\left(\frac{C}{4}\right)$, $\sin\left(\frac{D}{4}\right)$, $\cos\left(\frac{A}{4}\right)$, $\cos\left(\frac{B}{4}\right)$, $\cos\left(\frac{C}{4}\right)$ and $\cos\left(\frac{D}{4}\right)$. Use the first expression, and the fact that sine and cosine are odd and even functions, respectively, to derive expressions for $\sin\left(\frac{A-B+C-D}{4}\right)$, $\sin\left(\frac{A+B-C-D}{4}\right)$ and $\sin\left(\frac{A-B-C+D}{4}\right)$. Add the four expressions together. Make the following substitutions: $$ A = a+b+c+d \\ B = a-b+c-d \\ C = a+b-c-d \\ D = a-b-c+d $$ ...and you should get the formula.
The following formula works: $$ \begin{align} \sin(a)+sin(b)+sin(c)+sin(d) = 4*\sin\left(\frac{a+b+c+d}{4}\right)*\cos\left(\frac{a-b+c-d}{4}\right) \\ *\cos\left(\frac{a+b-c-d}{4}\right)*\cos\left(\frac{a-b-c+d}{4}\right)- \\ 4*\cos\left(\frac{a+b+c+d}{4}\right)*\sin\left(\frac{a-b+c-d}{4}\right) \\ *\sin\left(\frac{a+b-c-d}{4}\right)*\sin\left(\frac{a-b-c+d}{4}\right) \\ \end{align} $$ I wrote this Python script in Google colab to "prove" it: import numpy as np def C(x): return np.cos(180/np.pi*x) def S(x): return np.sin(180/np.pi*x) a = 37 b = 6 c = 88 d = 7 x = S(a) + S(b) + S(c) + S(d) y1 = 4*S((a+b+c+d)/4.0)*C((a-b+c-d)/4.0)*C((a+b-c-d)/4.0)*C((a-b-c+d)/4.0) y2 = 4*C((a+b+c+d)/4.0)*S((a-b+c-d)/4.0)*S((a+b-c-d)/4.0)*S((a-b-c+d)/4.0) print(x) print(y1-y2) When the input is $(a,b,c,d) = (37,6,88,7)$, with angles measured in degrees, the outputs for $x$ and $(y1-y2)$ agree to $11$ decimal places at $-1.02707706592$. I also tried $(a,b,c,d) = (10,27,18,68)$, and the two sides of the equation again agreed to $11$ decimal places. That isn't a mathematical proof, but the probability of my proposed identity holding true by accident for randomly selected angles is essentially nil. I tried making one of the angles obtuse, and it still worked. Steps in the derivation: Use angle sum formulas to derive an expression for $\sin\left(\frac{A+B+C+D}{4}\right)$ in terms of $\sin\left(\frac{A}{4}\right)$, $\sin\left(\frac{B}{4}\right)$, $\sin\left(\frac{C}{4}\right)$, $\sin\left(\frac{D}{4}\right)$, $\cos\left(\frac{A}{4}\right)$, $\cos\left(\frac{B}{4}\right)$, $\cos\left(\frac{C}{4}\right)$ and $\cos\left(\frac{D}{4}\right)$. Use the first expression, and the fact that sine and cosine are odd and even functions, respectively, to derive expressions for $\sin\left(\frac{A-B+C-D}{4}\right)$, $\sin\left(\frac{A+B-C-D}{4}\right)$ and $\sin\left(\frac{A-B-C+D}{4}\right)$. Add the four expressions together. Make the following substitutions: $$ A = a+b+c+d \\ B = a-b+c-d \\ C = a+b-c-d \\ D = a-b-c+d $$ ...and you should get the formula.
Solving Matrix question AX=B to find the possible solutions of B Question Find $b_1$ and $b_2$ so that the equation $Ax = b$ has solutions where $$A = \begin{bmatrix} 1 & 2 \\ 0 & 1\\ -1 & 2 \end{bmatrix};\quad b = \begin{bmatrix} b_1 \\ b_2 \\ 0 \end{bmatrix}.$$ Can this equation have a unique solution? Why or why not? Work $$\begin{bmatrix}1&2 \\0&1\\-1&2\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix} = \begin{bmatrix}b_1\\b_2\\0\end{bmatrix}$$ $$\begin{bmatrix} x_1 + 2x_2 \\ x_2 \\ -x_1 + 2x_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ 0 \end{bmatrix}$$ $$\left.\begin{array}{ll} x_1 + 2x_2 = b_1 \\ x_2 = b_2 \\ 2x_2 = x_1 \end{array}\right\} \Rightarrow \left.\begin{array}{ll}x_1 = b_1/2 \\ x_2 = b_1/4 \end{array}\right\} \Rightarrow b_1/4 = b_2 \Rightarrow b_1 = 4b_2$$
Hint: the equation $Ax=b$ has solutions $ \iff rank(A|b)=rank(A)$.
Hint: the equation $Ax=b$ has solutions $ \iff rank(A|b)=rank(A)$.
What is the initial algebra of the identity functor? I am trying to solve some of the exercises in Bird's Algebra of Programming. He defines an initial algebra as the initial object in the category of some F-algebra. In the book, he says that data Nat = Zero | Succ Nat is the initial algebra of the Maybe Functor, i.e, the functor F, where $FA = 1 + A$ and $Ff = id_1 + f$. Then he asks the following question as an exercise: What is the initial algebra of the Identity Functor? I have two guesses for this: My first guess, going by the pattern indicated for Nat, would be some bizzare object like this: data X = Tag X. But in this case, I cannot figure out what the homomorphism from X to an arbitrary A be. My second guess is that the initial object is some kind of tagged union of all the types and the homomorphism is an appropriate projection. Any help is appreciated
NOTE: copy-paste the sections in this answer to play with inside ghci Note that the initial object can be represented in Haskell as as an inhabitant of the type Mu (as shown below). {-# LANGUAGE NoImplicitPrelude #-} {-# LANGUAGE EmptyCase #-} import Prelude hiding (Maybe, Just, Nothing) -- | Mu F = initial object with respect to functor F data Mu f = Mu (f (Mu f)) I now exhibit the ismorphism between Nat and Mu Maybe below: -- | Maybe data Maybe a = Just a | Nothing deriving(Show) -- | Naturals data Nat = Zero | Succ Nat deriving(Show) -- | Bijection between naturals and initial object of Maybe functor natfwd :: Nat -> Mu (Maybe) natfwd Zero = Mu Nothing natfwd (Succ n) = Mu (Just (natfwd n)) natbwd :: Mu (Maybe) -> Nat natbwd (Mu Nothing) = Zero natbwd (Mu (Just mu)) = Succ (natbwd mu) Next, I show that the initial object generated by Id is isomorphic to Void --- that is, it is uninhabited. Intuitively, the initial object contains "arbitrary many nestings" of the original functor F. In the Maybe case, we had the ability to use Nothing to terminate this nesting. However, in the case of Id, this is no longer the case: there is no "branch" in the type Id to break this arbitrary recursion. Hence, it cannot be inhabited by any inductive value. Void is a way to declare values in haskell that cannot be inhabited, in case you have not seen it before. Void has no data constructors. data Void data Id a = Id a isoFwd :: Mu Id -> Void isoFwd (Mu (Id x)) = isoFwd x isoBwd :: Void -> Mu Id isoBwd v = case v of Claim: isoFwd : Mu Id -> Void proves that Mu Id is not inhabited Assume for contradiction that Mu Id is inhabited. Hence, there exists an x : Mu Id. Now, compute y = (isoFwd x) : Void. But we know that Void is uninhabited. Hence, contradiction. Therefore, Mu Id must be uninhabited. Claim: isoFwd, isoBwd define an isomorphism between Void and Mu Id To discuss equality of functions, we define: \begin{align*} \forall f, g : X \rightarrow Y, \quad f ~\texttt{==}~g \equiv \forall x \in X, f (x) = g(x) \end{align*} Note that if $X$ is uninhabited, then the relation ${f~\texttt{==}~g}$ is vacuously true. Armed with this, we show that isoFwd. isoBwd = id = isoBwd . isoFwd. isoFwd . isoBwd :: Void -> Void == id :: Void -> Void since Void is uninhabited, and hence the two LHS and RHS are vacuously equal. isoBwd . isoFwd :: Mu Id -> Mu Id == id :: Mu Id -> Mu Id since Mu Id is uninhabited, and hence the LHS and RHS are vacuously equal. Further reading For more of this kind of computational perspective, I can recommend reading about recursion schemes and cata/anamorphisms, from Bananas, Lenses, and Barbed Wire as well as the recursion-schemes library.
The initial algebra of the identity functor is a set $X$ together with a map $X\to X$ that is initial among all such maps : for any set $Y$ and map $Y\to Y$, there is a unique map $X\to Y$ making $$\require{AMScd}\begin{CD} X @>>> X \\ @VVV @VVV \\ Y@>>> Y\end{CD}$$commute. In particular, for any $Y$ there is a unique map $X\to Y$ (taking $id_Y :Y\to Y$), so $X$ is initial. Conversely, if $X$ is initial...
Parametrization of $x^2-y^2=1600$ While trying to compute the line integral along a path K on a function, I need to parametrize my path K in terms of a single variable, let's say this single variable will be $t$. My path is defined by the following ensemble: $$K=\{(x,y)\in(0,\infty)\times[-42,42]|x^2-y^2=1600\}$$ I know how to calculate the line integral, that is not my issue. My problem is to parametrize $x^2-y^2=1600$. I tried using the identities: $$\sin^2(t)+\cos^2(t)=1$$ $$\sec^2(t)-\tan^2(t)=1$$ But I did not get anywhere with my parametrization (see below for my poor try into parametrizing). I would welcome any help/hints and if you happen to know some good reading to learn more about parametrization, I am also interested. $$r(t)=1600\sec^2(t)-1600\tan^2(t)=1600$$ for $$x=40\sec(t) \land y=40\tan(t)$$
I think that using trigonometric function is overcomplicating it in this case. You can let $y$ correspond to a parameter $t$, then, since $x$ is given to be positive, we can say that $x$ is the following positive root $$x = \sqrt{1600 + t^2}.$$ Your parameterised curve is subsequently given by: $$\left\{\left(\sqrt{1600 + t^2},t\right): t \in [-42,42]\right\}.$$ Letting $y = 40\sinh(t)$ is also an option, in which case the parameterisation is given by $$\left(40 \cosh(t), 40 \sinh(t) \right).$$ This perhaps looks more appealing although finding the correct bounds on $t$ now involves inverse hyperbolic functions, which I will leave up to you if you are willing to do it.
$$x^2-y^2=1600$$ $$(\frac{x}{40})^2 - (\frac{y}{40})^2=1$$ Let $x=40\cdot(e^t+e^-t)/2=40\cdot\cosh{t}$. Let $y=40\cdot (e^t-e^{-t})/2=40 \cdot\sinh{t}$ Use positive values of t if you need positive values of $x$. $d/dt(\cosh{t})=\sinh{t}$ $d/dt(\sinh{t})=\cosh{t}$
Dedekind cuts for $\pi$ and $e$ I tried to search in the internet about this but did not get any exciting answers. So my question is: How is construction of transcendental numbers like $\pi$ and $e$ explained via Dedekind cuts?
Here is a simple description for $e$. The left set consists of all rationals $r$ such that $$r\lt 1+\frac{1}{1!}+\frac{1}{2!}+\frac{1}{3!}+\cdots +\frac{1}{n!}$$ for some $n$. This description is close in spirit to one of the many definitions of $e$. One can give a similar description for $\pi$, though there is nothing as natural. We could use the following variant of the "Leibniz" series, using for the left set all rationals $r$ such that $$r\lt 4-\frac{4}{3}+\frac{4}{5}-\frac{4}{7}+ \cdots +\frac{4}{4n+1}-\frac{4}{4n+3}$$ for some $n$. Note that we stop with a "$-$" because we want to make sure we are below $\pi$.
The real numbers are really troublesome due to the infinities that enter the whole game. It is possible to show that $\sqrt2$ is a cut, because you can easily go to a rational by squaring. With transcendental numbers this is much more difficult, and actually impossible. You simply have to believe that transcendentals are good numbers. It is a choice so to say. What is done above actually, is that $e$ and $\pi$ are represented by some finite rational interval that can be chosen arbitrary small. But rational intervals do not obey the field rules, since multiplication is not distributive over addition in an unambiguous way no matter how small you choose them. Also the interval that contains zero causes serious problems in elaborate calculations. Calculations based on intervals always can result in an interval that contains zero. Personally I would rather consider real values instead of real numbers, since there is no clarity whether real values are good numbers in general. It is still debated among mathematicians whether the reals are good numbers.
Difference between $\log n$ and $\log^2 n$ I'm researching the different execution time of various sorting algorithms and I've come across two with similar times, but I'm not sure if they are the same. Is there a difference between $\log n$ and $\log^2 n$? EDIT: Follow up question: in terms of complexity , which would be faster, $O(\log n)$ or $O(\log^2 n)$? My guess would be the first one. (Note, this is not homework, I'm just trying to understand the difference between quicksort and bitonic sort on a hypercube topology. )
$(\log(n))^2$ means $\log^2(n)$
O(log^2 N) is faster than O(log N) because of O(log^2 N) = O(log N)^2 = O(log N * log N) Therefore Complexity of O(log^2 N) > O(log N). Just take n as 2, 4, 16; O(log^2 N) O(log N) 2 --> 1^2 = 1 1 4 --> 2^2 = 4 2 16 --> 4^2 = 16 4
One-one correspondence between maximal ideals and maximal elements. Let $S$ be a multiplicatively closed set, i.e. if $a\in S$ and $b\in S$, we must have $ab\in S$. Then $R_{S}=\left\{\dfrac{r}{s}|r\in R, s\in S\right\}$, where $R$ is a commutative ring containing $S$. $R_{S}$ can be thought of as a subring of quotient field of $R$. Now we want to show that there is a one-one correspondence between totality of maximal ideals of $R_{S}$ and maximal elements $A=\{\alpha|\alpha$ is an ideal of R and $\alpha\cap S=\emptyset\}$ with respect to set inclusion. First of all, we can establish a homomorphism $\phi : R\rightarrow R_{S}$ where $\phi(r)=\dfrac{rs}{s}$. Now I for every proper ideal $I$ in $R_{S}$, we have $1\notin I$. So $I$ can't contain any element like $\dfrac{rs}{s}$ where $r\in S$. $\phi^{-1}(I)$ is an element in $A$. Now we have to prove that if $I$ is maximal ideal, then $\phi^{-1}(I)$ is a maximal element. Suppose $\phi^{-1}(I)$ is not a maximal element, we have $\phi^{-1}(I)\subset L$ where $L$ is another element in $A$. Now $<\phi(L)>$ is an ideal in $R_{S}$ generated by $\phi(L)$. I don't know how to show that this is a proper ideal. Conversely, for every maximal element $L$ in $R$, we have to show $<\phi(L)>$ is a maximal ideal in $R_{S}$. Suppose $<\phi(L)>$ is not maximal ideal, we have $<\phi(L)>\subsetneqq M$ where $M$ is maximal ideal in $R_{S}$. Then if the statement in the first paragraph is true, then we have $\phi^{-1}(M)=L$. How can we accurately prove that this is a contradiction. Appreciate it in advance.
We know the prime ideals of $R_s$ are 1:1 correspondence to the prime ideal of the prime ideals of $R$ which has no elements of $S$.So what you said before may be trivial.
We know the prime ideals of $R_s$ are 1:1 correspondence to the prime ideal of the prime ideals of $R$ which has no elements of $S$.So what you said before may be trivial.
Showing that $x^2\cos^2(\pi/(x^2))$ is not of bounded variation on $[0,1]$ Problem: Let $f(x) = x^2\cos^2(\pi/(x^2))$. Prove that $f$ is not of bounded variation on $[0,1]$ even though $f$ is differentiable on all of $\mathbb{R}$. Visual Intuition: The following is a visual of $f(x)$ that provides intuition for how $f$ could not be of bounded variation on $[0,1]$. Clearly our attention should be on the behavior of $f(x)$ when $x$ is near $0$. Attempt: Recall that $T_0^1(f)$ denotes the total variation from $0$ to $1$ on $f$ and is defined as follows: $$ T_0^1(f) = \sup \left\{\sum_{k=1}^n \left| f(x_i) - f(x_{i-1}) \right|\right\} \text{ across all subdivisions of $[0,1]$} $$ My idea is to show that $$ T_0^1(f) \ge \sum_{n=1}^{\infty} T_{1 \over n+1}^{1 \over n}(f) = \infty $$ But I'm not sure how to execute on this. $\fbox{EDIT: New proof using fgp's comments:}$ We have$$ f(x) = x^2\cos^2\left(\tfrac{\pi}{x^2}\right) \text{.} $$ So let $x_k = \sqrt{\frac{1}{k}}$. Then $f(x_k) = \frac{1}{k}\cos(\pi k)$, which means $$ f(x_k) = \begin{cases} \frac{1}{k} &\text{if $k$ is even (because then $\cos \pi k = 1$)} \\ -\frac{1}{k} &\text{if $k$ is odd (because then $\cos \pi k = -1$).} \\ \end{cases} $$ Thus we have $$ T_0^1(f) \geq \sum_{n=1}^\infty |f(x_{2n}) - f(x_{2n+1})| = \sum_{n=1}^\infty \frac{1}{2n} + \frac{1}{2(n+1)} \geq {1 \over 2} \sum_{n=1}^\infty \frac{1}{n} = \infty \text{.} $$ which completes the proof.
As a first step, let's look at $$ f(x) = x\cos\left(\tfrac{a}{x}\right) \text{.} $$ Let $x_k = \frac{a}{k\pi}$. Then $f(x_i) = \frac{a}{k\pi}\cos(\pi k)$, which means $$ f(x_k) = \begin{cases} \frac{a}{k\pi} &\text{if $k$ is even (because then $\cos \pi k = 1$)} \\ -\frac{a}{k\pi} &\text{if $k$ is odd (because then $\cos \pi k = -1$).} \\ \end{cases} $$ Thus (assuming $a\pi \geq 1$, if that isn't the case, just start the sum at some other index. That won't change anything, it'll still diverge)$$ T_0^1(f) \geq \sum_{n=1}^\infty |f(x_{2n}) - f(x_{2n+1})| = \sum_{n=1}^\infty \frac{a}{2n\pi} + \frac{a}{2(n+1)\pi} \geq \frac{a}{2\pi} \sum_{n=1}^\infty \frac{1}{n} = \infty \text{.} $$ For $x^2\cos^2\left(\frac{\pi}{x^2}\right)$, just expand $\cos^2(x)$ (it's something like $\frac{1}{2} + \frac{1}{2}\cos(2x)$ or so), and then substitute $u=x^2$.
Just compute the derivative and integrate its absolute value and you'll be able to proof that it (the function) is unbounded. The latter is easiest done by substituting z=1/x^2 and using a simple estimated lower bound for the contribution of each interval between two integers.
Sum of series $\sin x + \sin 2x + \sin 3x + \cdots $ Please help me compute the sum of the series: $$\sin(x)+\sin(2x)+\sin(3x)+\cdots$$
The series does not converge for all $x$. There are some $x$, for instance $x=0$ or $x=\pi$, for which the series converges to $0$, however if we consider $x=\frac \pi 2$ we find that our series is $1+0+-1+0+1+\cdots$ which does not converge. If you think of the unit circle, imagine a line whose angle from the positive x-axis is the value of $x$. Then the x-coordinate of this point on the unit circle is the value of $\sin x $. Doubling the angle yields a point whose x-coordinate is $\sin 2x$. Tripling it yields a point whose x-coordinate is $\sin 3x$. Continuing this, you can see that the pattern will continue with varied positive and negative values, not approaching any particular limit unless the line representing an angle of $x$ was aligned with the positive or negative x-axis. (This is not rigorous, but can be made to be so.) More technically, we have that $\lim_{n\to\infty} \sin nx =0$ iff $x=k\pi$ for some $k\in\Bbb Z$, so the series trivially converges to zero for such $x$ and diverges for all other $x$.
There is much more easier way to prove that the series does not converge when $x \ne k\pi$. $\liminf\limits_{n\to\infty}\sin nx=-1$ The corresponding subsequence is $n_k=-\frac{\pi -4\pi k}{2x}$ $\limsup\limits_{n\to\infty}\sin nx=1$ The corresponding subsequence is $n_k=\frac{\pi + 4\pi k}{2x}$ Therefore there is no limit of $\sin nx$ and hence a necessary condition of convergence is violated.
If $X,Y\subset\mathbb{R}$ are not measure zero sets, how can I show that $X\times Y\subset \mathbb{R}^2$ is not a measure zero set too? If $X,Y\subset\mathbb{R}$ are not measure zero sets, how can I show that $X\times Y\subset \mathbb{R}^2$ is not a measure zero set too? or (the following is an easy case) If $X\subset\mathbb{R}$ is not a measure zero set, how can I show that $X\times\mathbb{R} \subset \mathbb{R}^2$ is not a measure zero set too? How can I show the assertion by using the definition of measure zero (as follows)? (A subset $Z\subset \mathbb{R}$ is a mesuare zero set if $\forall\varepsilon>0$, $\exists$ countable open intervals $I_1, I_2, \cdots$ s. t. $Z\subset \cup_k I_k$ and $\sum_{k}length[I_k]<\varepsilon$).
We want to prove that if $X$ and $Y$ are Borel measurable sets and have nonzero measure, then $X \times Y$ has nonzero measure. Take any set $X$ of nonzero measure and now ask yourself what sets $Y$ behave nicely. Of course if $Y$ is an interval your approach gives you the desired result. Suppose that $\lambda(X \times Y) = 0$ and choose intervals $I_k, J_k$ such that $$X \times Y \subset \bigcup_k I_k \times J_k$$ where the latter union is disjoint and it holds that: $\sum \lambda(I_k \times J_k) \le \epsilon.$ Without loss of generality you can take all $J_k = Y$ and you would get $\lambda(X) \le \sum \lambda(I_k) \le \epsilon / \lambda(Y).$ So since $Y$ has nonzero measure you would get that $X$ has zero measure. This is a contradiction. Now you can build the class of sets that behave nicely, i.e the class $\mathcal{M}$ of all the sets $Y$ such that if $X \times Y$ has measure zero than $Y$ must have measure zero: $$\mathcal{M} = \{ Y \subset \mathbb{R} \text{ measurable, s.t. either } \lambda(Y) = 0 \text{ or } \lambda(X \times Y) > 0\}$$ This class is closed under countable monotone unions and intersections and contains intervals. Hence it is a monotone class which contains intervals (which in turn form a $\pi-$system). Hence by the monotone class theorem it contains all Borel sets on the real line.
If $E\subset\mathbb{R}^{p+q}\quad$and$\quad mE=0$then $mE_x=0$a.e. in $\mathbb{R}^p$ where$ E_x=\{y:y\in\mathbb{R}^p (x,y)\in E\}$
Does $\pi$ satisfy the law of the iterated logarithm? It is widely conjectured that $\pi$ is normal in base $2$. But what about the law of the iterated logarithm? Namely, if $x_n$ is the $n$th binary digit of $\pi$, does it seem likely (from computer experiments for example) that the following holds? $$\limsup_{n\rightarrow\infty} \frac{S_n }{\sqrt{n\log\log n}}=\sqrt{2}\quad\text{where}\quad S_n=2(x_1 + \ldots + x_n) - n$$ What about other (conjectured) normal numbers like $e$ and $\sqrt{2}$? I am sorry if this is too easy, but I tried to search for it and I could not find in on the Internet. I suppose I could run an experiment myself, but I assumed this is well known, and I would need to brush up on my programming skills to do so... Update 8/9/2013: I found a website with the first 32,000 binary digits of $\pi$ and (using a spreadsheet program) graphed out the average of the bits $S_n/n$, comparing it to $\sqrt{\frac{2 \log \log n}{n}}$. The results were inconclusive. The average never got close to $\sqrt{\frac{2 \log \log n}{n}}$ (except at the very beginning when it was way past it). However, I had the same result with a source of randomness (the one built into the spreadsheet program). My conclusion is that 32,000 bits is not enough to see if the law of the iterated logarithm (experimentally) holds for $\pi$. (The picture in the Wikipedia article uses at least $10^{50}$ bits, and the pattern is clear at about $10^{12}$ bits. However, I don't know where to get even 1,000,000 binary digits of $\pi$ on the Internet. [End Update] Also, I am sorry that I really don't know how to properly tag this.
I favourite'd this two years ago and recently stumbled upon it again. Here's some code I wrote to answer this numerically. The dotted line is $\sqrt{2 \log\log n/n}$ The solid line is $S_n/n$ $S_n = 2 \sum_{k=1}^n x_k - n$ where $x_k$ is the k-th decimal digit of Pi in binary. You should be able to zoom in by running the code. You'll need matplotlib and numpy. I used y-cruncher to generate the digits.
I am not sure if this is the answer you are looking for (or if you want something more computational specifically for $\pi$), but if a number is normal in base 2, then the sum $S_n$ that you defined above will satisfy the law of the iterated logarithm. First, let $X_i$ be the $i$th binary digit. Assuming normality, the digits $X_i$ are iid and $X_i\sim\text{Bernoulli}(1/2)$. Then, let $Y_i = 2X_i-1$. Consequently, $\mathrm{EY_i=0}$, $\text{Var}(Y_i)=1$, and $S_n = Y_1+\ldots+Y_n$. Consequently, $$ \mathrm{P}\left( \limsup_{n\rightarrow\infty} \frac{S_n}{\sqrt{2n\log\log n}}=1 \right)=1. $$ This result and an accompanying proof, which relies on the result for Brownian motion and Skorokhod embedding, can be found in Section I.16 of Rogers and Williams' Diffusions, Markov Processes and Martingales: Volume 1. In general, if a number is normal in base $b$, then the digits $X_i$ have a discrete uniform distribution on $0,1,2,\ldots,b-1$. Hence, $\mathrm{E}X_i=\frac{1}{2}(b-1)$ and $\text{Var}(X_i) = \frac{1}{12}(b^2-1)$. Therefore, setting $$ Y_i = \frac{2\sqrt{3}\left(X_i-\frac{1}{2}(b-1)\right)}{\sqrt{b^2-1}}, $$ gives $Y_i$ zero mean and unit variance. The sum of the $Y_i$ $$ S_n = \sum_{i=1}^n Y_i = \frac{2\sqrt{3}}{\sqrt{b^2-1}}\sum_{i=1}^n X_i -n\sqrt{3}\sqrt{\frac{b-1}{b+1}}, $$ once again satisfies the law of the iterated logarithm.
Find the homogeneous linear DE with constant coefficients of least order that has $y = 1 + 2e^{-2x}\cos x$ as a solution. Find the homogeneous linear DE with constant coefficients of least order that has $$y = 1 + 2e^{-2x}\cos x$$ as a solution.
Your solution function has terms for the eigenvalues $0,-2\pm i$ without polynomial factors for multiplicities, so that your DE can be read off as $$ (D-0)(D+2+i)(D+2-i)y=0 $$
The general solution of a first order homogeneous DE of the form $$ y'(x) + p(x)y = q(x)\tag{1}$$ is given by $$ y(x) = \frac{1}{\mu (x)}\int \mu(x) q(x)\tag{2},$$ where $\mu(x) = e^{\int p(x) \ dx}$ is the integrating factor. \begin{align} 1+2e^{-2x} \cos(x) &= \mu^{-1} (x) \int \mu(x) (0) \ dx = \frac{C}{\mu(x)} \\ \implies& \mu(x) = \frac{C}{1+2e^{-2x}\cos(x)}. \end{align} Therefore, \begin{align} \mu(x) = e^{\int p(x) \ dx} \implies \frac{C}{1+2e^{-2x}\cos(x)} &= e^{\int p(x) \ dx}\\ \ln\biggl |\frac{C}{1+2e^{-2x}\cos(x)} \biggr| &= \int p(x) \ dx\\ \frac{d}{dt}\ln\biggl |\frac{C}{1+2e^{-2x}\cos(x)} \biggr| &= p(x) \\ \therefore \frac{2\sin(x)+4\cos(x)}{e^{2x}+2\cos(x)} &= p(x). \end{align} Finally , using $(1)$, the initial differential equation is $$y' + \left(\frac{2\sin(x)+4\cos(x)}{e^{2x}+2\cos(x)} \right)y = 0$$
Classify the critical point $x=2$ of $f(x)=(x-2)^{17}\left(x+5\right)^{24}$ Question Let $f(x)=(x-2)^{17}(x+5)^{24}$. Then is $x=2$ a maximum, minimum, or neither? Book's Approach $f'''\left(2\right)\ne 0$. S Since the odd integral derivative of the function is non-zero, $x=2$ is neither a minimum nor maximum Please Explain How did they conclude odd integral derivative of the function is non-zero? $f'''(2)\ne 0$ it is $2^{nd}$derivative. I don't how they did it.
The hint: For $x>2$ we have $f(x)>0$ and for $x<2$ around $2$ we have $f(x)<0$.
Just a typo in the book. Should say: $f''(2)=0$, therefore neither maximum, nor minimum. f"(x) < 0 - local maximum f"(x) > 0 - local minimum f"(x) = 0 (your case)- neither of those
Usage of $\Rightarrow$ – Implication or type declaration? I have another notational question about the usage of $\Rightarrow$. This symbol is usually understood to indicate "material implication" (that is, $A\Rightarrow B$ means that the truth value of $B$ is at least as big as $A$, there hasn't to be a causal link between the two statements $A, B$). $A\Rightarrow B$ is often prononounced "If $A$, then $B$", and if there are free variables involved, then these are implicitly understood to be universally quantified. Now, I noticed that "If $A$, then $B$" is not always a material implication in mathematics. It could as well mean a type declaration (or universal quantification): The statement "If $f$ is a function $A\to B$ and $A$ a nonempty set, then: blablabla" shouldn't be understood to mean "For any $f$, if $f$ happens to be a function with nonempty domain, then blablabla". Also, "For all real numbers x, ..." shouldn't mean "for any object x, if x is a real number, then ..." Question: When "If $A$, then $B$" is used in such a context where it shouldn't be understood as a material implication, but as a type declaration (or universal quantification), can this also be abbreviated by the $\Rightarrow$ sign? For example, I think I've seen in the definition of the term "first-order formula" $$\text{$\phi$, $\psi$ formulae $\Rightarrow$ $\phi\land\psi$ formula}$$ But on the other hand, it would seem strange to abbreviate the statement "For all natural numbers $n$, we have $n^0 = 1$" by $$n\in\mathbb N\Rightarrow n^0 = 1.$$
I am going to give two technically grounded arguments. One contradicting what you state, and one agreeing with what you state. They come to roughly the same conclusion (for different reasons). First, the set theory view. In (material) set theory, e.g. ZFC, all the things you say "shouldn't be understood to mean" or "seem strange" are completely legitimate. While $f : A \to B$ is not usually a primitive of set theory (but it is in some), it is completely reasonable to define it as $$f\text{ is a function }\land \pi_1(f) = A \land \pi_2(f) \subset B$$ where "is a function" is an abbreviation for the set theoretic definition of a function as a binary relation satisfying certain properties and $\pi_1$, $\pi_2$ are projections lifted to sets. The upshot is: it very much can be viewed as stating "filter $f$ to the things that satisfy this property". The next example is even simpler. $\forall x \in \mathbb{R}.P(x)$ is often (in set theory) defined to mean $\forall x.x\in \mathbb{R} \Rightarrow P(x)$. This also addresses your last example: $\forall n.n \in \mathbb{N} \Rightarrow n^0 = 1$ is a completely legitimate formal term of ZFC (given suitable standard definitions). So this is not strange, this is what set theorists do all the time. Now, a type theorist says all the above was drivel. In type theory a statement like $$n :\mathbb{N} \Rightarrow n^0 = 1$$ is simply syntactically meaningless. That is, it's not even a well-formed formula even in a context where $n$ is bound. A type declaration like $n : \mathbb{N}$ is not a proposition that can be true or false. In type theory, there is no alternative to $\forall x:\mathbb{R}.P(x)$. To a type theorist $(x \mapsto x^2) : \mathbb{R} \to \mathbb{R}$ and $(x \mapsto x^2) : \mathbb{R} \to \mathbb{R}^+$ are just totally different things even though they are literally identical as set theoretic functions. But in (dependent) type theory there is the dependent product type often written $\Pi x:T.E(x)$ for some type $T$ and term $E$. In (particularly constructive) type theory it is common for function types, universal quantification, and implication all to be special cases of $\Pi$-types. In most implementations of type theory, $A \to B$ is defined to be $\Pi x:A.B$ (where $x$ is not free in $B$, i.e. we just ignore $x$). In NuPRL, $\forall$ and $\Rightarrow$ are simply $\Pi$ and $\to$ at the type Prop. (NuPRL uses the syntax $x:A \to B(x)$ for $\Pi x:A.B(x)$.) To summarize, in (material) set theory, the cases you are worried about are fine; a lot of high level syntax expands to the very constructs you are worried about. That said, implication and universal quantification and functions, while having interrelations are quite distinct concepts. In type theory, all of these concepts are literally special cases of the same concept.
$A\Rightarrow B$ is often pronounced "If $A$, then $B$", and if there are free variables involved, then these are implicitly understood to be universally quantified. Free variables are universally quantified over their entire deduction, not the formula they appear in. Now, I noticed that "If $A$, then $B$" is not always a material implication in mathematics. Yes it is. It could as well mean a type declaration (or universal quantification): No it can't. The statement "If $f$ is a function $A \to B$ and $A$ a nonempty set, then: blablabla" shouldn't be understood to mean "For any $f$, if $f$ happens to be a function with nonempty domain, then blablabla". Of course they aren't the same, you left off the restriction about the domain and codomain of $f$. Also, "For all real numbers x, ..." shouldn't mean "for any object x, if x is a real number, then ..." Yes it does mean that. Who told you it doesn't? Don't listen to that person. Question: When "If $A$, then $B$" is used in such a context where it shouldn't be understood as a material implication, but as a type declaration (or universal quantification), There is no such situation. But on the other hand, it would seem strange to abbreviate the statement "For all natural numbers $n$, we have $n^0 = 1$" by $$n\in\mathbb N\Rightarrow n^0 = 1.$$ Those are equivalent statements as long as nowhere else is anything assumed about $n$. There is a very surprising isomorphism between $\Rightarrow$ in logic and $\rightarrow$ in set theory. But they are not the same thing. $A \rightarrow B$ is defined in set theory to be the set of all functions with domain $A$ and codomain $B$, and is sometimes written $B^A$. For example: $$\{x, y\} \rightarrow \{1, 2\} = \{ \{(x, 1), (y, 1)\}, \{(x, 1), (y, 2)\}, \{(x, 2), (y, 1)\}, \{(x, 2), (y, 2)\}\}$$
distributive law in polish notation On page 18 of "Logic as Algebra", Halmos & Givant write the distributive law in Polish notation as $$ = \times a + bc + \times ab \times ac $$ I fail to see anything remarkable here; is there a combinatorial pattern that I'm missing?
I'm not quite sure what you're looking for, but here's a bit about the distributive laws. The polish notation: $=\times a+bc+\times ab \times ac$ in standard infix notation is: $a \times (b+c) = a \times b + a \times c$. What this means is one gets the same result if one multiplies then adds or adds then multiplies. Using abstract algebra terminology, the distributive laws say that multiplication operators are addition preserving maps (group homomorphisms with respect to addition). An addition preserving map (a group homomorphism, written additively) is a function such that $\varphi(b+c)=\varphi(b)+\varphi(c)$ (you can add then map or map then add). So in polish notation this is: $= \varphi + b \; c + \varphi \; b \; \varphi \; c$. This is the general pattern you're looking for (I guess). This property is capturing a kind of commutativity: It doesn't matter which is done first: map or add. Or in your context, it doesn't matter which is done first: add or multiply.
I'd say that Polish notation (or reverse Polish notation) makes it clearer that "the outer operation moves to the inside and doubles, and the inner operation moves to the outside" when applying distributivity. Syntactically speaking, expressions in Polish notation (and reverse Polish notation) are almost always clearer than in ubiquitous infix notation. If you consider ax(b+c)=ab+ac, or a(b+c)=(axb)+(axc), and most other infix-notation expressions, you have to consider more than just the syntax of the expressions involved (such as what the author intends to say, which he may have stated earlier in the text; however, this is not part of the syntax of the expressions). In other words, if you interpret either ax(b+c)=ab+ac or a(b+c)=(axb)+(axc) purely in terms of what your formation rules (the rules for what counts as a well-formed formula) say exactly, they'll come out as nonsensical, or at best as ambiguous. On the other hand, in Polish notation, so long as you know the arity of the operations and predicates/relations (here "="), something like =×a+bc+×ab×ac is clear basically from the formation rules alone. For instance, with this condensed formation rule:
1. If "p" and "q" both represent well-defined values, then +pq, ×pq, and =pq each represent well-defined values.
and this axiom:
2. "a", "b", and "c" represent well-defined values,
you can formally prove =×a+bc+×ab×ac to be well-defined as follows:
1 +bc by rule 1 and axiom 2
2 ×ab by rule 1 and axiom 2
3 ×ac by rule 1 and axiom 2
4 +×ab×ac by steps 2 and 3, and rule 1
5 ×a+bc by axiom 2, step 1, and rule 1
6 =×a+bc+×ab×ac by steps 4 and 5, and rule 1
You have to fully parenthesize an infix expression in order to do something like this.
What's the best way to optimize this energy function, and is it convex? I have an energy function $E({\bf y})=||\,g({\bf Ay+c})-{\bf d}\,||^2_2 + ||\,{\bf y-e}\,||^2_2 + \alpha\,|{\bf y}|_1$ I need to minimize this with respect to $\bf y$, all other variables being constant. $g(\bf u)$ is a non-linear scalar function that acts on the elements of $\bf u$ independently. Typically this function is also monotonic and convex. In the simplest case, $g(\bf u) = u$, but other choices include $g({\bf u})=\log(1+\exp(\bf u))$. My first question is whether this is a convex problem. I think that it is convex if $g(\bf u) = u$, and I'm guessing it might also be convex if this function is nonlinear but of convex form, but I'm really not sure. Then I would please like a suggestion as to the quickest way to optimize for $\bf y$. I don't really know how to deal with the L1 term. The dimensionality of $\bf y$ is of the order 10,000 and $\bf A$ and $\bf B$ are typically sparse. Turns out this is not convex - see comments. Any suggestions on the best non-convex approach to use?
If $g(u)$ is convex, then your problem is convex as well. [EDIT by mcg: the linear/affine case is probably the only useful one for which this is convex.] Other cases can be convex as well. There are several simple rules to be used to detect convexity. You may want to take a look at a classic book: http://stanford.edu/~boyd/cvxbook/ The $|\cdot|_1$ is a simple regularization term that you can easily convert into a set of linear constraints: $\min |x|_1$ is equivalent to $$ \min \sum t_i\\ t_i\geq |x_i| \quad i=1,\ldots,n $$ and hence $$ \min \sum t_i\\ t_i\geq x_i \quad i=1,\ldots,n \\ t_i\geq -x_i \quad i=1,\ldots,n $$ More details are in the book I mentioned. Then you can use any QP solver. If you work in MATLAB and you do not want to work on the formulation, I would suggest you use YALMIP or CVX. You can find
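If you prefer Python to MATLAB, the same modelling approach works in CVXPY. The snippet below is only a sketch of the convex case $g(u)=u$; the data `A`, `c`, `d`, `e`, `alpha` are random placeholders standing in for the problem's actual (sparse) inputs, and the dimension is shrunk to keep the toy example fast.

```python
# Sketch: minimize ||A y + c - d||_2^2 + ||y - e||_2^2 + alpha * ||y||_1
# assuming g is the identity (the convex case).  All data are placeholders.
import numpy as np
import cvxpy as cp

n = 50                                  # toy size; the real problem is ~10,000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
c, d, e = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n)
alpha = 0.1

y = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(A @ y + c - d)
                        + cp.sum_squares(y - e)
                        + alpha * cp.norm1(y))
problem = cp.Problem(objective)
problem.solve()                         # the l1 term is reformulated internally
print(problem.status, problem.value)
```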
Mathematical function to convert two numbers into one? Is there a mathematical function that converts two numbers into one so that the two numbers can always be extracted again?
Consider the map $$(x, y)\to 2^x3^y$$ For example, $$(\color{blue}{7}, \color{green}{3})\mapsto 2^\color{blue}{7}3^\color{green}{3} = 128\cdot27 = 3456.$$ To extract the original two numbers, divide 3456 by 2 and count how many times you can do this before you are left with an odd number: $$3456 \to 1728\to 864\to 432\to 216\to108\to 54\to 27$$ That's $\color{blue}{7}$ times. Then divide 27 by 3 until you are left with 1: $$27\to9\to3\to 1$$ That's $\color{green}{3}$ times, so the original numbers were $\color{blue}{7}$ and $\color{green}{3}$. Unlike the other answer in this thread, this works for any whole numbers, of unlimited size. But the process for extracting the original two numbers by this method is slow.
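A quick way to see this encoding in action is the little sketch below: `encode` builds $2^x3^y$ and `decode` recovers the exponents by repeated division, exactly as described above (the function names are mine).

```python
def encode(x, y):
    """Map the pair (x, y) of non-negative integers to 2**x * 3**y."""
    return 2 ** x * 3 ** y

def decode(n):
    """Recover (x, y) by counting how many times 2 and then 3 divide n."""
    x = 0
    while n % 2 == 0:
        n //= 2
        x += 1
    y = 0
    while n % 3 == 0:
        n //= 3
        y += 1
    return x, y

print(encode(7, 3))     # 3456
print(decode(3456))     # (7, 3)
```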
It is possible if you omit the condition "always". For example, for a natural number $n$ and natural numbers $x,y \leq 10^n$, consider the function $f(x,y)=10^{2n}x+y.$ In this case we are able to extract $x$ and $y$ from $f(x,y).$
Let $G$ be a group, with ... Show that $G$ is a cyclic group. Let $G$ be a group with $|G|=455$. Show that $G$ is a cyclic group.
It is well-known that if $n$ is a natural number, there is only one group of order n if and only if $\gcd(n,\varphi(n))=1$. Here $\varphi$ is the Euler totient function. For $n=455$ this applies. If there is only one group of a particular order it must necessarily be cyclic.
Notice $455 = 13\cdot 7\cdot 5$ and we know $13$, $7$ and $5$ are prime. Now use Lagrange's theorem to show that $G$ is cyclic. This is a hint.
Prove $a ≤ x ≤ b ⇒ x ≤ a+b$ $∀a,b ∊ R^+$ It seems too obvious to be proved algebraically. What I thought was: We know that the sum of two positive numbers is a positive number, so: $a + b > b$ and $a + b > a$ Using the hypothesis of $a > x$ and $b > x$, then $a + b > b > x$ and $a + b > a > x$ Is it correct? If you know, please tell me another way to prove it.
It seems you are on the right track, but you made one error in the final line with the $a + b > a > x$. I would solve this as: $\forall a,b \in \mathbb{R}^+$, $b<a+b$ and $a<a+b$ (definition of addition between two positive numbers) Given $x, a, b$ such that $a \leq x \leq b$ and $b<a+b$ then we can conclude through the transitive property $a \leq x < b$ or $$x<a+b \ \ \ \text{and} \ \ \ x \geq a$$ The only item I can't seem to prove is that $x \leq a+b$. I would say by the above proof, $x$ is strictly less than $a+b$.
Solving $e^x + x = 5$ for $x$ without using a numerical method? Canadian economist Mike Moffat asks on Twitter: Math nerd Q: Is there a way to solve $e^x + x = 5$ for $x$, without using a numerical method?
Write $y=5-x$. Then $ye^y=e^5$. Using the Lambert W function, this gives $x=5-W(e^5)$.
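As a numerical sanity check of this closed form, one can evaluate it with SciPy's implementation of the Lambert W function (this only verifies the formula above, it is not a replacement for it):

```python
import numpy as np
from scipy.special import lambertw

x = 5 - lambertw(np.exp(5)).real   # x = 5 - W(e^5)
print(x)                           # approximately 1.307
print(np.exp(x) + x)               # approximately 5, as required
```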
Step 1. Transform the equation $e^x+x=5$ into $e^x=5-x$. Step 2. Write each side of the new equation as a function: $y_1=e^x$ and $y_2=5-x$. Step 3. Graph these two functions in the Desmos Graphing Calculator (you can find the application online). Step 4. Find the intersection of the two graphs. It has an $x$ and a $y$ value. Step 5. The $x$-value is the solution of our equation: $x=1.307$ (approximately).
Total Curvature of 4 pi What does it mean for a surface to have a total curvature of $4\pi $? I have seen that both the catenoid and Enneper surface are the only minimal surfaces that have this total curvature, but I don't really understand what significance this has? Could anyone please explain this?
To calculate the finite total curvature of a complete minimal surface $\phi:M\longrightarrow\mathbb{R}^3$, one starts from the topology of $\bar{M}$ and the geometry of the ends. An $\mathbf{end}$ of a complete minimal surface of finite total curvature is the image $c_i=\phi(D_i-p_i)$ of a sufficiently small punctured disk $D_i-p_i$ on $\bar{M}$ centered at the point $p_i$. $\mathbf{Theorem}$. If $g$ is the genus of $\bar{M}$, $M\cong{\bar{M}-\{p_1,...,p_r\}}$, $d_1,...,d_r$ are the multiplicities of the ends $c_1,...,c_r$, and $\tau$ is the total curvature, then $$\tau(M)=2\pi(2-2g-r-\sum_{j=1}^rd_j)=2\pi(\chi(\bar{M})-r-\sum_{j=1}^rd_j).$$ -- $M=$ catenoid has two ends: $r=2, d_1=d_2=1$, and $\bar{M}=\mathbb{S}^2$. Consequently $$\tau(M)=2\pi(2-2-2)=-4\pi.$$ -- $M=$ Enneper surface has one end of multiplicity $d_1$, and $\bar{M}=\mathbb{S}^2$; since $\tau(M)=-4\pi$, consequently $d_1=3$.
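As a concrete check of the value $-4\pi$ for the catenoid: for the standard parametrization $(\cosh v\cos u,\ \cosh v\sin u,\ v)$ the Gaussian curvature is $K=-1/\cosh^4 v$ and the area element is $\cosh^2 v\,du\,dv$, so the total curvature is $-2\pi\int_{-\infty}^{\infty}\operatorname{sech}^2 v\,dv=-4\pi$. The sketch below merely confirms this numerically; it is an illustration, not part of the theorem above.

```python
import numpy as np
from scipy.integrate import quad

# Catenoid (cosh v cos u, cosh v sin u, v):
# K = -1/cosh(v)^4, dA = cosh(v)^2 du dv, so tau = -2*pi * int sech(v)^2 dv.
integrand = lambda v: 1.0 / np.cosh(v) ** 2
value, _ = quad(integrand, -np.inf, np.inf)   # equals 2
tau = -2 * np.pi * value
print(tau, -4 * np.pi)                        # both about -12.566
```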
Well first of all these have total curvature $-4\pi$, not $4\pi$. The Gauss-Bonnet theorem tells us that the total curvature of a surface is equal to $2\pi$ times the Euler characteristic of the surface. Moreover we know that the Euler characteristic of a connected surface is always an integer that is at most 2. So in general we might want to ask: What are all connected minimal surfaces without boundary in $\mathbb{R^3}$ with total curvature $2\pi n$? We see this is the same as asking that the Euler characteristic be n. Well, if $n=1,2$ the answer is none, since a minimal surface has nonpositive Gaussian curvature everywhere. Now we can go down from there trying to understand the most topologically simple minimal surfaces in $\mathbb{R^3}$. For $n=0$ it's not hard to figure out that a plane is the only connected minimal surface without boundary in $\mathbb{R^3}$ that has total curvature 0. What you are saying is that these are the only two minimal surfaces in $\mathbb{R^3}$ with Euler characteristic $-2$. So we see this case still has a relatively nice answer. But as we decrease $n$ even further the story becomes increasingly complicated, and a general answer is unknown.
What Exactly Are Quotient/Factor Groups and Rings? I'm having a lot of trouble wrapping my head around this one. The mathematical definition is that $G'$ is a factor group of $G$ if $G'$ = $G$/$N$, where $N$ is some normal subgroup of $G$. $R'$ is a factor ring of $R$ if $R'$ = $R$/$I$, where $I$ is an ideal. My question is, just "what the hell" are they exactly? Are we taking all elements of $N$ (and similarly $I$) and taking them out of $G$ and $R$? If so, why? Why is that important? If not, what exactly are we doing when we create a factor group/ring? I could really use some help visualizing these. Thank you.
Another important way to think about quotient groups: they are precisely the images of group homomorphisms. This is a consequence of the first isomorphism theorem: if $G$ and $H$ are groups and $\phi : G \to H$ is a group homomorphism, then $G/K \cong \operatorname{im}\phi$, where $K = \operatorname{ker}\phi$. Conversely, if $N$ is any normal subgroup of $G$, then there is a natural homomorphism $\phi : G \to G/N$ given by $\phi(g) = gN$. The image of this homomorphism is $G/N$, and the kernel is $N$. Similar remarks apply to quotient rings.
The basic idea is that whenever you have a sub-group(ring), this sub-object allows you to classify the elements of the big group(ring) into different set-theoretical classes or "boxes". How? In the easiest way you can imagine: consider a sub-group $H\subset G$ and $a,b\in G$. We say these elements are in the same box (class) iff $aH=bH$, where $bH=\{bh:h\in H\}$. The same is done with rings, but using the additive structure (if $I\subset R$ and $a,b\in R$, then $a\sim b$ iff $a-b\in I$). Now, these are just set-theoretical decompositions of your original algebraic objects, but if additionally the sub-group(ring) is normal(ideal) these boxes inherit the algebraic structure from the original group(ring)!
Does there exist a vector space $V$ which is isomorphic to $\mathbb{R}^m$, but is not a topological/smooth manifold of dimension $m$? Does there exist a vector space $V$ which is isomorphic to $\mathbb{R}^m$, but is not a topological/smooth manifold of dimension $m$? Suppose $V$ is isomorphic to $\mathbb{R}^m$. In the category of vector spaces, $V$ is equivalent to $\mathbb{R}^m$, but $V$ may not have a topology on it, so it may not be a topological manifold, and hence it can definitely not be a smooth manifold (since all smooth manifolds are just topological manifolds endowed with a smooth structure). If $V$ is homeomorphic to $\mathbb{R}^m$, then we can conclude that $V$ is equivalent to $\mathbb{R}^m$ in the category of topological spaces, and $V$ must also be a topological manifold; however, since not every topological manifold is a smooth manifold (see the Exotic Sphere as an example), we can't necessarily conclude that $V$ is diffeomorphic to $\mathbb{R}^m$. So my question is the following, does an isomorphism between $V$ and $\mathbb{R}^m$ induce a homeomorphism between $V$ and $\mathbb{R}^m$, and furthermore does a homeomorphism between $V$ and $\mathbb{R}^m$ (along with the isomorphism between the two vector spaces) induce a diffeomorphism between $V$ and $\mathbb{R}^m$? If not, could someone provide me with an example of a vector space $V$ which is isomorphic to $\mathbb{R}^m$, but not homeomorphic, and also of an example of a vector space $V$ which is homeomorphic and isomorphic to $\mathbb{R}^m$ but not diffeomorphic to $\mathbb{R}^m$? Note: Isomorphism here means isomorphism of vector spaces.
You can give whatever topology/smooth structure to $\mathbb R^n$ as you like without affecting its vector space structure. Consider $X=\mathbb R^n$ with the trivial topology, i.e. only two open subsets: the empty set and $X$. This is not homeomorphic to $Y=\mathbb R^n$ with the usual Euclidean topology, for many reasons (e.g. $X$ is compact but $Y$ is not). Again, let $X'=\mathbb R^n$ and find a bijection with $\mathbb R$ and use this bijection to make $X'$ a $1$-dimensional manifold. Of course this cannot be homeomorphic to $Y=\mathbb R^n$, which is an $n$-dimensional manifold for $n>1$.
No. A change of basis in $V$ gives you a smooth map on $\mathbb{R}^n$. So you have a smooth structure which is diffeomorphic to $\mathbb{R}^n$.
Yearly contribution to reach given stock (compound interest) I'm working on a calculator to compute the final stock given a yearly contribution (either a fixed contribution or with a linear growth). The user provides the contribution for the first year, if he wants to increase this contribution yearly (the growth value is fixed, the user only chooses if he wants it or not), the interest rate and the number of years (compounding frequency is always yearly). Calculating the final stock given those inputs isn't difficult, I simply use the Compound Interest formula for each year from zero to $t$ and accumulate the results: $$F = Py(1+i)^t$$ Where: $F =$ Final value $Py =$ Contribution on year $y$ (same each year or increased by a percentage) $i =$ Interest rate $t =$ Remaining years But I don't have a clue about how to calculate (or at least approximate) the other way around. I.e. calculate the needed yearly contribution to reach a desired final stock, given the interest rate and the number of years, either with fixed contributions or increasing ones.
Let $g$ be the (constant) growth rate and $G=1+g$ the growth factor. And let $r$ be the interest rate and $q=1+r$. Then the future value of the growing contributions is $F_n=C\cdot q^{n-1}+ C\cdot q^{n-2}\cdot G+\ldots + C\cdot q\cdot G^{n-2}+C\cdot G^{n-1}$ $C\cdot q^{n-1}:$ The first contribution has to be compounded n-1 times and no growth. $C\cdot G^{n-1}:$ The n-th contribution has grown n-1 times and no compounding. $F_n=C\cdot \left[ q^{n-1}+ q^{n-2}\cdot G+\ldots + q\cdot G^{n-2}+ G^{n-1} \right]$ $q\cdot F_n=C\cdot \left[ q^{n}+ q^{n-1}\cdot G+\ldots + q^2\cdot G^{n-2}+ q \cdot G^{n-1} \right] \quad \quad \quad (1)$ $G\cdot F_n=C\cdot \left[ \ \ \quad q^{n-1}\cdot G+ q^{n-2}\cdot G^2+\ldots + q\cdot G^{n-1}+ G^{n} \right] \quad (2)$ Subtracting (2) from (1) $(q-G)\cdot F_n=C\cdot (q^n-G^n)$ $\boxed{F_n=C\cdot\frac{q^n-G^n}{q-G}}$
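Given this closed form, the question's inverse problem (find the first contribution $C$ that reaches a target $F_n$) is just $C = F_n\,(q-G)/(q^n-G^n)$ when $q\neq G$, and $C=F_n/(n\,q^{n-1})$ in the degenerate case $q=G$. Below is a small sketch of both directions; the function names are mine, not calls from any standard finance library.

```python
def final_value(C, r, g, n):
    """Future value of n yearly contributions starting at C, growing by g,
    compounded at rate r: F_n = C * (q**n - G**n) / (q - G)."""
    q, G = 1 + r, 1 + g
    if abs(q - G) < 1e-12:               # degenerate case q == G
        return C * n * q ** (n - 1)
    return C * (q ** n - G ** n) / (q - G)

def first_contribution(F, r, g, n):
    """Invert the formula: first-year contribution needed to reach F."""
    q, G = 1 + r, 1 + g
    if abs(q - G) < 1e-12:
        return F / (n * q ** (n - 1))
    return F * (q - G) / (q ** n - G ** n)

C = first_contribution(100_000, r=0.05, g=0.02, n=20)
print(C, final_value(C, r=0.05, g=0.02, n=20))   # second value is 100000
```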
Your formula is $\sum_{t=0}^n P_y(1+ i)^t$. That is the same as the "geometric sum" $\sum_{t= 0}^n ar^t$, for which a formula is known: $\frac{a(1- r^{n+1})}{1- r}$. Here $a= P_y$ and $r= 1+ i$, so that becomes $F=\frac{P_y((1+ i)^{n+1}-1)}{i}$. So $P_y= \frac{Fi}{(1+i)^{n+1}-1}$.
Proving a function is continuous for all real numbers This a homework question: Prove $f(x) = 2x^3 - 4x^2 + 5$ is continuous for all real numbers. Which proof technique do I use for this? I don't really know where to start.
Hint: If $f$ and $g$ are continuous, then so are $f + g, f - g, f \cdot g, f / g$ (the latter provided $g$ isn't equal to $0$).
As $f(x)=2x^3-4x^2+5$ is a polynomial function, it is continuous at every real number (it is a theorem).
Does $\lim_{k\to\infty} \sum_{n=1}^{k} f(n)$ converge? I am wondering if there is a rigorous way to show the following series converges: $$ f(x) = \begin{cases} \dfrac{1}{x^2} & x \in \mathbb{N} \\[1ex] g(x) & x \in \mathbb{R}\setminus\mathbb{N} \end{cases}$$ Here, $g(x)$ could be anything, but, to make it interesting, something that wouldn't converge on its own. When we do the following: $$\lim_{k\to\infty}\sum_{n=1}^{k}f(n).$$ Do we get a limit that exists? Intuitively we would, because summation is only defined on the natural numbers (unless we redefined it here, doing some interpolation between points). However, I am not certain this function is integrable, which would suggest the function does not converge (unless my understanding of the Integral Test is wrong, after all, it says nothing of whether the integral exists, just if it converges the sum converges and vice-versa).
To address your confusion, we note that the integral test requires that $f(x)$ must be monotone decreasing over the interval $\left[1, \infty\right)$ to be applicable. Check this link: https://en.wikipedia.org/wiki/Integral_test_for_convergence Hence, if we have $$f(n) = \frac{1}{n^2}$$ for all $n \in \mathbb{N}$, and you want to extend it to $\mathbb{R}$ by setting $f(x) = g(x)$ for all $x \in \mathbb{R} \setminus \mathbb{N}$, then you have to make sure that the selection of $g(x)$ keeps $f(x)$ monotone decreasing over the interval $\left[1, \infty\right)$ so that the integral test can kick in. If the ultimate goal of extending $f(n)$ to the domain $\mathbb{R}$ as $f(x)$ is to hopefully construct $f(x)$ so that we can use the integral test to prove that the series $$\sum_{n = 1}^{\infty} f(n) = \lim_{k \to \infty} \sum_{n = 1}^{k} f(n)$$ converges, then in this case, the way you extend $f(n)$ to $f(x)$ matters. If you can prove that there exists at least one way to extend $f(n)$ to $f(x)$ such that $f(x)$ is monotone decreasing over the interval $\left[1, \infty\right)$, then this is sufficient to invoke the integral test to prove that $$\sum_{n = 1}^{\infty} f(n) < \infty$$ For instance, you can trivially take $g(x) = 1/x^2$ for $x \in \mathbb{R} \setminus \mathbb{N}$ so we have $f(x) = 1/x^2$ for all $x \in \mathbb{R}$ which is monotone decreasing. But if you just want to know if the limit exists, then you can do so without extending $f(n)$ to $\mathbb{R}$. I mean, you can do so without invoking the integral test. Evaluating this sum is known as the Basel problem and the result is $\pi^2/6$. Check this link: https://en.wikipedia.org/wiki/Basel_problem
About the series, $$\sum_{n\ge 1} f(n)=\sum_{n\ge 1}\frac{1}{n^2}$$ converges as a Riemann series $(2>1)$. As an integral, we can say nothing. It depends on the expression of $ f(x).$ Take for example $ f(x)=1$ if $ x\notin \Bbb N$, then $$\int_1^{+\infty}f(x)dx \;\; diverges$$ and If $$(\forall x\ge 1)\;\; f(x)=\frac{1}{x^2}\; $$ the integral $$\int_1^{+\infty}f(x)dx\;\; converges$$
How to prove a subspace? Let $X$ be a Banach space and let $M$ be a subset of $X$. Then $M$ is itself a Banach space (using the norm from $X$) if and only if $M$ is a closed subspace. How do I prove ($\Rightarrow$), i.e. show that $M$ is a closed subspace? Question: if $M$ is a subset of $X$, does that mean $M$ is a subspace? If this statement is true, I can prove it. Thanks. :-)
Suppose $M$ is closed. You have to show $M$ is complete. Let $x_n$ be a Cauchy Sequence in $M$. Since $X$ is a Banach space, $x_n$ converges to some $x\in X$. Since $M$ is closed, $x\in M$. So $M$ is complete. Suppose $M$ is a Banach space. You have to show $M$ is closed in $X$. Since $X$ is a normed space it suffices to show that, if $x_n$ is a sequence in $M$ which converges in $X$ to some $x\in X$, then it follows that $x\in M$. So suppose $x_n$ is convergent to $x$. Then $x_n$ is a Cauchy Sequence in $X$, but since $M$ carries the same norm, in $M$ as well. Since $M$ is a Banach space, $x_n$ converges in $M$ to $x^\prime \in M$. Since $M\subset X$ with the induced topology, $x_n\rightarrow x^\prime$ in $X$. By uniqueness of the limit, $x = x^\prime$, so $M$ is closed.
For $M$ to be a subspace you need that $\forall x,y\in M$ and for every scalar $\alpha$, $x+y\in M$ and $\alpha x \in M$. But this doesn't mean that $M$ is a Banach space. To be a Banach space $M$ needs to be a complete space; that's why one requires that $M$ be closed. To show that $M$ is closed you need to show that every Cauchy sequence in $M$ converges to a point in $M$.
Does the Cauchy Schwarz inequality hold on the L1 and L infinity norm? So I am wondering if the Cauchy-Schwarz inequality holds for all $p$-norms, not just when $p=2$, which is the Euclidean case. Thank you.
The Cauchy-Schwarz inequality can be generalized as follows: $$|x^\top y|\leq \|x\|\,\|y\|_{\star}, \quad \forall x,y \in \mathbb{R}^{n},$$ where $\|\cdot\|_\star$ is called the dual norm. Since the dual of the $l_2$ norm is the $l_2$ norm itself, using $l_2$ above we recover the Cauchy-Schwarz inequality. Since $l_1$ and $l_\infty$ are dual to each other, we obtain: $$|x^\top y|\leq \|x\|_1\|y\|_\infty$$
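A quick numerical illustration of both inequalities on random vectors (only a sanity check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(5):
    x, y = rng.standard_normal(10), rng.standard_normal(10)
    lhs = abs(x @ y)
    # Hölder-type bounds: |x.y| <= ||x||_1 ||y||_inf  and  |x.y| <= ||x||_2 ||y||_2
    print(lhs <= np.linalg.norm(x, 1) * np.linalg.norm(y, np.inf),
          lhs <= np.linalg.norm(x, 2) * np.linalg.norm(y, 2))
```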
Among other things, this question is easily construed as a reasonable one, about the continuity of pairings $\langle, \rangle \colon V\times V^*\to \mathbb C$. Among other true things that can be said, it is a general fact that separately continuous pairings on Frechet spaces turn out to be jointly continuous for general reasons. ... thus... what is the real question?
Reference for Functional Analysis I am in search of Functional Analysis books. I need a book which has lots of elementary problems. I mean it should force us to think more, which will give a deeper insight into functional analysis. To be precise, I need lots of problems like: Is $A$ open in the space $\ell^1$ with respect to the norm $\|.\|_2$? Find a counterexample to this statement, etc. I mean it shouldn't ask us trivial questions. It should ask us questions which will force us to think more and more about functional analysis.
The most elementary book for functional analysis I've seen is Introductory Functional Analysis with Applications by E. Kreyszig; it is really, really understandable and its problems are 'basic'.
If you're new, try Simmons's book. Other than that, the books of Walter Rudin and John B. Conway both contain many non-trivial questions. P.R. Halmos's "A Hilbert Space Problem Book" (Springer) is difficult, to me at least.
Solving Recurrences using Telescoping/Backwards Substitution Specifically, $$T(n)=3T(n-1)+1; \quad T(1)=1.$$ I have \begin{align*} T(n) & = 3T(n-1)+1 \\ & = 3(3T(n-2)+1)+1 \\ & = 9T(n-2)+4 \\ & = 9(3T(n-3)+1)+4 \\ & = 27T(n-3)+13 \\ & = \cdots \\ & = (3^k)T(n-k)+(3^k - 1). \end{align*} Am I on the right track? I feel like something is off, b/c it doesn't work for the base case, but I don't know where I went wrong? and is there any hard and fast rule about how far you should "telescope" backwards? Thanks in advance
Here is a general method. You are given $T_n=A(T_{n-1})$ for some affine function $A$, defined by $A(x)=ax+b$, and you want to iterate $A$ because you know that $T_n=A^{n-1}(T_1)$ for every $n\geqslant1$. This is doable because: Affine functions are conjugate to linear functions. Linear functions are easy to iterate, to wit, if $L:x\mapsto cx$, then $L^n(x)=c^nx$. Thus, the task is to find $L$ linear such that $A$ and $L$ are conjugate. Here is the only bit to remember: The function $A$ is conjugate to a linear function $L$ through a translation, for every $a\ne1$, and the translation should send the fixed point of $L$, which is $0$, to the fixed point of $A$. Hence the first task is to find the fixed point $z_A$ of $A$: this solves $z_A=A(z_A)=az_A+b$, that is, $z_A=b/(1-a)$ (since $a\ne1$). And now, behold, for every $t$, $A(z_A+t)=A(z_A)+at=z_A+at$. Iterating, one sees that, for every $n$, $A^n(z_A+t)=z_A+a^nt$ for every $t$, that is, $A^n(x)=z_A+a^n(x-z_A)$. (Once again, this algebraic miracle occurs due to the conjugation of $A$ to $L:t\mapsto at$.) Here, $(a,b)=(3,1)$ hence $z_A=-\frac12$ and $T_n=A^{n-1}(1)=z_A+3^{n-1}(1-z_A)=\frac12(3^n-1)$.
There is a simple way to make this kind of recurrence homogeneous. Let $A(n)=T(n)-a$, where $a$ is chosen such that the recurrence satisfied by $A$ is homogeneous. We get $a=3a+1$, so $a=-\frac{1}{2}$. The question is simplified to $A(n)=3A(n-1)$, which is easy to solve.
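Either derivation gives the closed form $T(n)=\tfrac12(3^n-1)$. A tiny check of that closed form against the original recurrence (just a verification sketch):

```python
def T(n):
    """The recurrence T(n) = 3*T(n-1) + 1 with T(1) = 1."""
    t = 1
    for _ in range(n - 1):
        t = 3 * t + 1
    return t

for n in range(1, 10):
    assert T(n) == (3 ** n - 1) // 2     # closed form from the answers above
print("closed form matches the recurrence for n = 1..9")
```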
Average of Random variables converges in probability. Let $(\Omega, F, P)$ be a probability space. Suppose that $X_1, X_2, X_3,...$ is a sequence of random variables and $E(X_i)=0$ for all $i\in \mathbb{N}$. Let $Y_n=\frac{X_1+X_2+... +X_n}{n}$. Claim: $Y_n$ converges to 0 in probability. Let $A_n=\{w : |Y_n(w)| > \epsilon >0\}$ $$\int _{A_n} |Y_n| dP \geq \int_{A_n} \epsilon dP \geq P(A_n) \epsilon$$ But $|Y_n| \leq |X_1|/n + |X_2|/n +... |X_n|/n$ so $P(A_n)=0$. Is it right?
If you assume the $X_i$ are iid, this is the Weak Law of Large Numbers. Without that assumption (or some slight generalizations of it), the statement is false.
The result does not hold. Robert has a fine counterexample. You are stating that $|Y_n| \leq |X_1|/n + ... + |X_n|/n$ and concluding that therefore, $P(A_n)=0$. How do you arrive at this conclusion? I suppose you evaluate $\int_{A_n} |Y_n| \, \mathrm{d}P \leq \frac{1}{n} \sum_{i=1}^n \int_{A_n} |X_i| \, \mathrm{d}P$, but how do you argue hereafter? Maybe useful: $EX_i=0$ does not imply $E|X_i|=0$. Actually $E|X_i|=0$ iff $X_i=0$ almost surely, in which case your statement is trivial.
Intuition for pullback operation In manifolds and complex geometry there is this thing called the pullback. Usually when I see it, it's going backwards on maps that are going forwards. I've been told that it is just a composition of functions. I just need help understanding this concept.
Sometimes an example is better than a thousand words. Let's say you have these forms on $\mathbb{R}^{4}$: $$\omega^{1}= y\mbox{d}x-x\mbox{d}y-t\mbox{d}z+z\mbox{d}t,$$ $$\omega^{2}= t\mbox{d}x-z\mbox{d}y+y\mbox{d}z-x\mbox{d}t,$$ $$\omega^{3}= -z\mbox{d}x-t\mbox{d}y+x\mbox{d}z+y\mbox{d}t,$$ $$\omega^{4}= x\mbox{d}x+y\mbox{d}y+z\mbox{d}z+t\mbox{d}t.$$ And you want to pull them back into $S^3$. Then you need an application from $S^3$ into $\mathbb{R}^{4}$. Let's take this one $$F\left(\psi,\,\theta,\,\phi\right)=\left(\sin\psi\sin\theta\cos\phi,\sin\psi\sin\theta\sin\phi,\sin\psi\cos\theta,\cos\psi\right).$$ This application sends a point of coordinates $\left(\psi,\,\theta,\,\phi\right)$ on the sphere to a point of coordinates $(x,y,z,t)$ on $\mathbb{R}^{4}$ where of course $$x=\sin\psi\sin\theta\cos\phi,$$ $$y=\sin\psi\sin\theta\sin\phi,$$ $$z=\sin\psi\cos\theta,$$ $$t=\cos\psi.$$ Now to do the pullback you have by definition $$F^{*}(\omega^{i})=\omega^{i}(F_{*})$$ Then what you have to do is just differentiate $$dx = \cos\psi\sin\theta\cos\phi d\psi+\sin\psi\cos\theta\cos\phi d\theta-\sin\psi\sin\theta\sin\phi d\phi$$ $$dy = \cos\psi\sin\theta\sin\phi d\psi+\sin\psi\cos\theta\sin\phi d\theta+\sin\psi\sin\theta\cos\phi d\phi$$ $$dz = \cos\psi\cos\theta d\psi-\sin\psi\sin\theta d\theta$$ $$dt = -\sin\psi d\psi.$$ Then you substitute inside the original formula (substituting $dx,dy,dz,dt$ along with $x,y,z,t$ of course) and you obtain the desired forms pulled back $$\omega^{1}= \cos\theta\mbox{d}\psi-\sin\psi\cos\psi\sin\theta\mbox{d}\theta-\sin^{2}\psi\sin^{2}\theta\mbox{d}\phi,$$ $$ \omega^{2}= \sin\theta\cos\phi\mbox{d}\psi-\sin\psi\left(\sin\psi\sin\phi-\cos\psi\cos\theta\cos\phi\right)\mbox{d}\theta-\sin\psi\sin\theta\left(\cos\psi\sin\phi+\sin\psi\cos\theta\cos\phi\right)\mbox{d}\phi,$$ $$ \omega^{3}= -\sin\theta\sin\phi\mbox{d}\psi-\sin\psi\left(\sin\psi\cos\phi+\cos\psi\cos\theta\sin\phi\right)\mbox{d}\theta -\sin\psi\sin\theta\left(\cos\psi\cos\phi-\sin\psi\cos\theta\sin\phi\right)\mbox{d}\phi,$$ $$ \omega^{4}= 0.$$ You notice that the last one is 0 because the form varies only along the radius of the sphere. Anyhow you cannot obtain more than 3 independent forms on $S^3$, and the first three are independent. I think this example is complicated enough to let you practice, but simple enough to let you grasp what's going on here (which is effectively just the formalization of a change of variables).
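One can also let a computer algebra system do the bookkeeping. The SymPy sketch below recomputes the pullback of $\omega^4$ and confirms it vanishes, by expressing each of $dx,dy,dz,dt$ through $d\psi,d\theta,d\phi$ exactly as above; it is only a verification of the computation, and the helper names are my own.

```python
# SymPy check that the pullback of omega^4 = x dx + y dy + z dz + t dt
# to the 3-sphere vanishes.  Each 1-form is stored as its coefficients
# with respect to (d psi, d theta, d phi).
import sympy as sp

psi, theta, phi = sp.symbols('psi theta phi')
params = (psi, theta, phi)

x = sp.sin(psi) * sp.sin(theta) * sp.cos(phi)
y = sp.sin(psi) * sp.sin(theta) * sp.sin(phi)
z = sp.sin(psi) * sp.cos(theta)
t = sp.cos(psi)

def d(f):
    """Differential of a function, as coefficients of (dpsi, dtheta, dphi)."""
    return [sp.diff(f, v) for v in params]

omega4 = [sp.simplify(x * dx + y * dy + z * dz + t * dt)
          for dx, dy, dz, dt in zip(d(x), d(y), d(z), d(t))]
print(omega4)    # [0, 0, 0] -- the pullback of omega^4 is identically zero
```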
The general definition of a pullback: Given two morphisms $f:X\to Z$ and $g:Y\to Z$, then a pullback of $f$ and $g$ (if it exist) is two morphisms $p_X:P\to X$ and $p_Y:P\to Y$ $\require{AMScd}$ \begin{CD} P @>p_Y>> Y\\ @V p_X V V= @VV g V\\ X @>>f> Z \end{CD} such that given morphisms $p'_X:P'\to X$ and $p'_Y:P'\to Y$ with \begin{CD} P' @>p'_Y>> Y\\ @V p'_X V V= @VV g V\\ X @>>f> Z \end{CD} there exist a unique morphism $\varphi:P'\to P$ such that $p'_X=p_X\varphi$ and $p'_Y=p_Y\varphi$. I suspect that the morphism $\varphi$ sometimes is called pullback, but in category theory it's the two morphisms $p_X:P\to X$ and $p_Y:P\to Y$. See also: Universal Definition for Pullback
About amicable numbers My question might seem wrong but I will explain it now: We will define $\phi(a)$ as the sum of divisors of $a$ except $a$. A pair $(a,b)$ is an amicable pair $\iff ((\phi(a) = b) \land (\phi(b) = a)) \land (a \neq b)$ Obviously if $(a,b)$ is an amicable pair $(b,a)$ is also an amicable pair. So, we will say $a$ is an amicable number $\iff$ there exists a $b$ such that $(a,b)$ is an amicable pair and $b < a$. Also we will say $a$ is a perfect number $\iff$ $\phi(a) = a$. So my question is: if we pick an arbitrary number, is it more likely an amicable number or a perfect number? This question might be wrong because they might be finite, but my question is simple: "Are there more amicable numbers than perfect numbers?" I checked with a computer and believe that it is more likely an amicable number. Let's formalize this as follows: Let $P(n)$ be the number of perfect numbers $\leq n$, and let $A(n)$ be the number of amicable numbers less than $n$. Does $\lim_{n\rightarrow\infty} \frac{P(n)}{A(n)}$ exist, and if so what is its value?
I got the limits that I'm basing this on mainly from Wikipedia and WolframAlpha, so feel free to correct if something's wrong. According to Wikipedia, $P(n)<c\sqrt n$, where c is a constant. I assume that this is a rough asymptotic limit, correct me if I'm wrong. It also says that $A(n)<ne^{-{\ln(n)}^{1/3}}$, so $\frac {P(n)}{A(n)}\sim\frac {c\sqrt n}{ne^{-{\ln(n)}^{1/3}}}$. The limit is easier to evaluate after taking the logarithm, so we have $\ln ({\frac {P(n)}{A(n)}})\sim\ln(n)^{1/3}-\frac{\ln(n)}2+\ln(c)$, so the limit is $-\infty$, since the $-\ln(n)\over 2$ dominates the function as n approaches $\infty$. To convert this back into its original form, we take $e^{-\infty}=0$. Therefore $\lim_{n\to \infty} \frac{P(n)}{A(n)}=0.$
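For what it's worth, the question's empirical observation is easy to reproduce by brute force. The sketch below counts, up to $10^4$, the perfect numbers and the amicable numbers in the question's sense ($\phi(a)=b<a$ and $\phi(b)=a$); it only illustrates the counting functions $P(n)$ and $A(n)$, not their asymptotics, and the helper names are mine.

```python
def proper_divisor_sums(n):
    """s[a] = sum of the divisors of a excluding a itself (phi in the question)."""
    s = [0] * (n + 1)
    for d in range(1, n // 2 + 1):
        for m in range(2 * d, n + 1, d):
            s[m] += d
    return s

N = 10_000
s = proper_divisor_sums(N)
# perfect: phi(a) = a;  amicable (question's definition): phi(a) = b < a, phi(b) = a
P = sum(1 for a in range(2, N + 1) if s[a] == a)
A = sum(1 for a in range(2, N + 1) if s[a] < a and s[s[a]] == a)
print(P, A)   # 4 perfect numbers vs. 5 amicable numbers up to 10^4
```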
Ambiguous conditions for $x$ in logarithm? I have just realised that there is a little ambiguity in defining conditions for $x$ in logarithm. Let me illustrate it on a simple example: $\log{x}$ is valid for $x>0$ , $2\log{x}$ is also valid only for $x>0$ , but $\log{x^2}$ is valid for both $x>0$ and $x<0$ . How is that possible when $2\log{x} = \log{x^2}$ ?
Note that logarithm identities are true only when both sides of the equation are defined, so $2\log x = \log x^2$ for $x > 0$, and not when $x < 0$. This thinking could lead to things like $-1 = \sqrt{-1} \cdot \sqrt{-1} = \sqrt{-1 \cdot -1} = \sqrt{1} = 1$, which is obviously false. The problem here is that $\sqrt{ab}=\sqrt{a}\cdot\sqrt{b}$ only when $a,b>0$.
As is well known, language does not follow the rules of logic. "If you behave well you will have an ice cream" according to logic does not imply that if you behave badly you will not have the ice cream, but any child knows that in language this is precisely what it means. One could argue that language reflects the way we reason, logic the way we should reason. No student, not even one in math, would misunderstand what you mean when you say $\log(x)$ is defined for positive $x$ and then write $\log x^2 =2 \log x$.
Does a 15-puzzle always have a solution For those that are not familiar with (this type of) sliding puzzles, basically you have a number of tiles on a board equal to (n * m) - 1 (possibly more holes if you want). The goal is to re-arrange the tiles in such a way that solves the puzzle. The puzzle could be anything, from number games to images. While writing a small app for this, I found that if I were to initialize the puzzle by randomly shuffle all of the pieces, I could end up in a situation where there is no solution if my puzzle was 2x2. So the problem I have is: given a sliding puzzle with n-by-m dimensions, is there always a solution if there are a sufficient number of tiles (eg: a 3x3 board)? How would I even begin to prove this, or simply convince myself that it is the case? If it is possible that a random shuffle could result in no-solution, then I'd have to figure out how to verify that there exists a solution, which is a completely different beast.
Martin Gardner had a very good writeup on the 14-15 puzzle in one of his Mathematical Games books. Sam Loyd invented the puzzle. He periodically posted rewards for solutions to certain starting configurations. None of those rewards were claimed. Much analysis was expended, and it was finally determined, through a parity argument (as mentioned in the comment above), that half of the possible starting configurations were unsolvable. Interestingly, ALL of Loyd's reward configurations were unsolvable. SO: No, every possible configuration is not solvable. If you START with a solved puzzle, and apply only legal transformations (moves) to it, you always wind up at a solvable configuration. For the GENERAL nxm question, you'd probably have to expand the parity argument.
Here we consider the following problem: for a given position on the board, say whether there is a sequence of moves leading to the solved position or not. Let there be given a position on the board:
+------------------+
| 11 | 15 | 2 | 13 |
|----|----|---|----|
| 14 | 1 | 8 | 3 |
|----|----|---|----|
| 7 | 6 | 0 | 10 |
|----|----|---|----|
| 4 | 12 | 9 | 5 |
+------------------+
wherein one of the elements is zero and indicates an empty cell. Consider the numbers in the matrix serially (rowwise): 11 15 2 13 14 1 8 3 7 6 0 10 4 12 9 5. Denote by N the number of inversions in this permutation, counted among the fifteen numbered tiles only, skipping the zero (i.e. the number of pairs of tiles a[i] and a[j] such that i < j but a[i] > a[j]). Next, let K be the number of the row containing the empty element (i.e. in our notation K = 3). Then a solution exists if and only if N + K is even.
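A direct implementation of this criterion for the 4×4 case might look like the sketch below (it counts inversions among the fifteen numbered tiles only, as stated above):

```python
def solvable_15_puzzle(board):
    """board: 4x4 list of lists with 0 marking the blank.
    Returns True iff N + K is even, where N counts inversions among the
    non-blank tiles read row by row and K is the 1-based row of the blank."""
    flat = [v for row in board for v in row]
    tiles = [v for v in flat if v != 0]
    N = sum(1 for i in range(len(tiles))
              for j in range(i + 1, len(tiles)) if tiles[i] > tiles[j])
    K = flat.index(0) // 4 + 1
    return (N + K) % 2 == 0

solved  = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 0]]
example = [[11, 15, 2, 13], [14, 1, 8, 3], [7, 6, 0, 10], [4, 12, 9, 5]]
print(solvable_15_puzzle(solved))    # True: the solved board is trivially solvable
print(solvable_15_puzzle(example))   # verdict for the example position above
```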
Is it true that the only matrix that is similar to a scalar matrix is itself A square matrix $A$ is said to be similar to a square matrix $B$, of the same order, if there is an invertible matrix $P$ such that $A = PBP^{-1}$. Is this true? The only matrix that is similar to a scalar matrix is itself. I.e., if $C$ is a scalar matrix and $T = P CP^{−1}$, then $T = C$. If yes, how do I prove it?
If $C$ is a scalar matrix, then $C = \mu I_{d}$, and so $T = PCP^{-1} = P(\mu I_{d})P^{-1} = \mu I_{d} (P P^{-1}) = \mu I_{d}$. Note: this proof relies on a simple detail and a funny exercise, but one that should not be underestimated; in the third equality I used that the center of the matrix algebra consists only of the scalar matrices, i.e. $Z(M_{n}(\mathbb{K})) = \{\mu I_{d} : \mu \in \mathbb{K}\}$.
Is the set of integers as big as P(prime numbers)? I have a function $f:\mathbb{N}\to P(\mathbb{P})$ where $f(x) = \{\text{prime factors of } x\}$. $f$ is not $1:1$ (easy). I proved that $f$ is onto $P(\mathbb{P})$, which is probably wrong since it means $|\mathbb{N}|\geq |P(\mathbb{P})|$ but $|\mathbb{N}|=|\mathbb{P}|$. Every subset of primes has a source in $\mathbb{N}$ (the product of all the primes in it). What's wrong with this?
Every finite set of primes is in the range of $f$. However, what about any infinite set of primes? Say, the set of all primes, itself? No number has infinitely many prime factors, so no infinite set of primes is in the range of $f$. Indeed, the set of finite sets of primes is countable, while the set of infinite sets of primes is uncountable - "most" sets of primes are infinite!
As someone pointed out in the comments, $\mathbb{N}$ has the same cardinality as $\mathbb{P}$ because they are both infinite subsets of the countably infinite set $\mathbb{Z}$, hence they are both countably infinite sets. If you want to see this explicitly, just define $f : \mathbb{N} \to \mathbb{P}$ letting $f(n)$ be the $n$th prime, i.e. $f(1) = 2, f(2) = 3, f(3) = 5,\ldots$.
If N is a quadratic residue modulo p for all primes p<N, is N a perfect square? It is known that $N$ is a perfect square if and only if $N$ is a quadratic residue modulo every prime $p$. This gives a good probabilistic algorithm for testing if a randomly chosen positive integer is a perfect square - simply compute its Legendre symbol for a sufficiently large set of randomly chosen primes. A single result of $-1$ definitively tells you that $N$ is not a perfect square, while repeated results of only $+1$ or $0$ increase the probability that $N$ is a perfect square. This probabilistic test could be much quicker with lower probability of a false positive if we could check only primes less than the integer $N$. The proofs I've seen that $N$ is a perfect square IFF $N$ is a QR mod p for all primes $p$ cannot be used to prove the result using only the finitely many primes less than $N$. Has anyone seen or does anyone know of a proof of this more specific result, or alternatively, know of a counter-example? I've burned more than a few CPU cycles on Mathematica looking for a counterexample and have not found one yet.
No. For example, $\;6\;$ is a quadratic residue mod $\;2,3\;\text{ and }\,5\;$ .
A related problem: It is true that if $n$ is a quadratic residue modulo $p$ for all primes $p$ then $n$ is a perfect square.
Defining addition in second order logic (before saying it's a duplicate, read the whole question) I was told by someone that we can define addition and multiplication purely in terms of the successor function, provided that we work in second-order arithmetic. When trying to find a way in which it's defined, I found this M.SE question, and in the first answer a definition is given. I found it, however, quite unsatisfactory, because this definition uses quantification over functional symbols (which can be rewritten to use binary predicate quantification), and I expected a definition which would only require quantification over subsets of $\Bbb N$ (equivalently unary predicates). I expect something similar to the answer to this question. My question is: how to define addition in second order arithmetic using quantification over sets only? Is that even possible? Thanks in advance.
As it says in the answer to Can equinumerosity by defined in monadic second-order logic?, Büchi showed that the monadic second order of theory of the successor function is decidable. This implies that you cannot define addition in this theory, because if you could, then you could also define multiplication (as shown in the answer to How to define multiplication in addition terms in monadic second order logic?) and then the theory would be at least as strong as PA, which is not decidable. I can't find the original work by Büchi online, but this paper by Rabinovich and Thomas has the relevant references.
Not sure what you mean by "using quantification over sets only," but this is how you define addition and multiplication in terms of the successor function. Addition: $\forall x \in N: x+0=x$ $\forall x,y\in N: x+S(y) = S(x+y)$ Multiplication: $\forall x \in N: x\cdot0=0$ $\forall x,y\in N: x\cdot S(y) = x\cdot y + x$ You can construct these functions using Peano's axioms and the axioms of set theory. For the addition function, we would define the set of ordered triples $A$ such that $\forall x,y,z:[(x,y,z)\in A \iff (x,y,z)\in N^3 $ $\land \forall B\subset N^3:[\forall a\in N: (a,0,a)\in B \land \forall a,b,c: [ (a,b,c)\in B \implies (a,S(b),S(c))\in B]$ $ \implies (x,y,z)\in B]] $ Similarly, for the multiplication function (having defined $+$), we would define the set of ordered triples $M$ such that $\forall x,y,z:[(x,y,z)\in M \iff (x,y,z)\in N^3 $ $\land \forall B\subset N^3:[\forall a\in N: (a,0,0)\in B \land \forall a,b,c: [ (a,b,c)\in B \implies (a,b+1,c+a)\in B]$ $ \implies (x,y,z)\in B]] $
How to solve the differential equation $yy' =-x$? I just started learning about Differential Equations and there's this equation that I had problems with: $$yy' =-x$$ I know that it's a separable ODE and that I can integrate both sides. The right side is then going to be $-\frac{x^2}{2}+c$ but I'm not sure how to go about the left side. I know that the solution is $\frac{y^2}{2} = -\frac{x^2}{2}+c$ but why is the left side equal to that? Usually $\frac{y^2}{2}$ is the anti-derivative of $y$ and not $yy'$. What am I missing here ? Thanks.
$$y y' = -x$$ $$y \frac{dy}{dx} = -x$$ $$y dy = -x dx$$ This is what it means to separate it. Now only terms involving $y$ are on the left, and terms involving $x$ are on the right. So integrate... $$y^2/2 + C_1 = - x^2/2 + C_2$$ $$y^2 = -x^2 + C$$ $$y = \pm \sqrt{C-x^2}$$ As another user pointed out, you can also just recognize that $$(y^2)' = 2y y'$$
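For what it's worth, a computer algebra system reproduces the same family of solutions mechanically (a quick check, not a substitute for the separation argument above):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(y(x) * y(x).diff(x), -x)   # y * y' = -x
print(sp.dsolve(ode))                  # solutions of the form y(x) = +/- sqrt(C1 - x**2)
```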
Hint: You can easily check that the antiderivative of $\sin(x)$ is not $\frac{\sin^2(x)}{2}$. But you can be sure that the antiderivative of $\sin(x)\cos(x)$ is $\frac{\sin^2(x)}{2}$; do not forget to add a constant $C$. $$\int x\cdot 1\,dx =\frac{x^2}{2}$$ and by the same reasoning $$\int y(x)\,y'(x)\,dx=\frac{y^2(x)}{2}$$
Why is a circle not a convex set? Why is a circle not a convex set? My attempt: I was googling it, but I didn't find the correct answer, as I didn't understand the explanation why a circle is not a convex set as defined here: As shown in diagram 1, points A and B and points C and D lie inside the circle, thus I think the circle must be a convex set, as clearly shown in figure 1. Why isn't the circle a convex set?
They mean just the edge of the circle, without the interior. That is not a convex set.
Because a chord between two points of the circle does not lie on the circle.
Products of adjugate matrices Let $S$ and $A$ be a symmetric and a skew-symmetric $n \times n$ matrix over $\mathbb{R}$, respectively. When calculating (numerically) the product $S^{-1} A S^{-1}$ I keep getting the factor $\det S$ in the denominator, while I would expect to get the square $$S^{-1} A S^{-1} = \frac{(\text{adj }S) A (\text{adj }S)}{(\det S)^2},$$ where $\text{adj }S$ is the adjugate of $S$. Is there a way to prove that the combination $(\text{adj }S) A (\text{adj }S)$ already contains a factor of $\det S$?
Assume that $\det(S)=0$. Then $adj(S)$ has rank $1$ and is symmetric; then $adj(S)=avv^T$ where $a\in \mathbb{R}$ and $v$ is a vector. Thus $adj(S)Aadj(S)=a^2v(v^TAv)v^T$. Since $A$ is skew-symmetric, $v^TAv=0$ and $adj(S)Aadj(S)=0$. We use Darij's method; here, the condition is that $\det(S)$ is an irreducible polynomial when $S$ is a generic symmetric matrix; if it is true, then $\det(S)$ is a factor of every entry of $adj(S)Aadj(S)$. EDIT 1. For the proof that $\det(S)$ is an irreducible polynomial when $S$ is a generic symmetric matrix, cf. https://mathoverflow.net/questions/50362/irreducibility-of-determinant-of-symmetric-matrix and we are done! EDIT 2. @darij grinberg: hi Darij, I read quickly your Theorem 1 (for $K$, a commutative ring with unity) and I think that your proof works; yet it is complicated! I think (as you wrote in your comment above) that it suffices to prove the result when $K$ is a field; yet I do not know how to write it rigorously... STEP 1. $K$ is a field. If $\det(S)=0$, then $adj(S)=avv^T$ (it is symmetric of rank at most $1$) and $adj(S).A.adj(S)=a^2v(v^TAv)v^T=0$ (even if $char(K)=2$). Since $\det(.)$ is irreducible over $M_n(K)$, we conclude as above. STEP 2. Let $S=[s_{ij}],A=[a_{i,j}]$. We work in the ring of polynomials $\mathbb{Z}[(s_{i,j})_{i,j},(a_{i,j})_{i<j}]$ in the indeterminates $(s_{i,j}),(a_{i,j})$. This ring has no zero-divisors, is factorial, its characteristic is $0$, and it is even integrally closed. Clearly the entries of $adj(S).A.adj(S)$ are in $\mathbb{Z}[(s_{i,j})_{i,j},(a_{i,j})_{i<j}]$; moreover they formally have $\det(S)$ as a factor. Now, if $K$ is a commutative ring with unity, we must use an argument using a variant of Gauss' lemma showing that the factor $\det(S)$ is preserved over $K$. What form of the lemma can be used and how to write it correctly? I just see that the OP takes for himself the green chevron; we are our own best advocates.
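The divisibility claim can also be spot-checked symbolically. The sketch below builds a generic symbolic symmetric $S$ and skew-symmetric $A$ of size $3\times 3$, forms $\operatorname{adj}(S)\,A\,\operatorname{adj}(S)$, and verifies that every entry is a polynomial multiple of $\det S$; this is only a verification for one size, not a proof.

```python
import sympy as sp

s11, s12, s13, s22, s23, s33 = sp.symbols('s11 s12 s13 s22 s23 s33')
a12, a13, a23 = sp.symbols('a12 a13 a23')

S = sp.Matrix([[s11, s12, s13],
               [s12, s22, s23],
               [s13, s23, s33]])            # generic symmetric matrix
A = sp.Matrix([[0,    a12,  a13],
               [-a12, 0,    a23],
               [-a13, -a23, 0]])            # generic skew-symmetric matrix

M = S.adjugate() * A * S.adjugate()
detS = S.det()
# every entry of M should be divisible, as a polynomial, by det(S)
print(all(sp.cancel(entry / detS).is_polynomial() for entry in M))   # True
```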
Because $S\mbox{adj}(S)=\mbox{adj}(S)S=\mbox{det}(S)I$, then $S$ is invertible iff $\mbox{det}(S)\ne 0$, and, in that case $$ S^{-1} = \frac{1}{\mbox{det}(S)}\mbox{adj}(S). $$ Therefore, $$ S^{-1}AS^{-1} = \frac{1}{\mbox{det}(S)^{2}}\mbox{adj}(S)\,A\,\mbox{adj}(S). $$
Prove that for all sets $A$ and $B$, $( A \cup B ) - B = A$ implies $A \cap B =\varnothing$. In the following proof we harness properties such as: For every set $A$, (i) $A = A - \varnothing$ (ii) $A \cap\varnothing=\varnothing$ Proof: Let $A$ and $B$ be arbitrary sets and suppose $( A \cup B ) - B = A$. Given (i) then $$( A \cup B ) - B = A - \varnothing.$$ So then, $( A \cup B ) = A$ and $B = \varnothing$. That being so, $$A \cap B = ( A \cup B ) \cap \varnothing = \varnothing$$ if we apply (ii) to the foregoing equivalence relation. Therefore $$A \cap B = \varnothing$$ Is this proof right? If it is, I would like to see alternative ones.
Suppose that $x \in A \cap B$ existed. Then $x \in A$ but also $x \in A \cup B$ and $x \in B$ so that $x \notin (A \cup B) - B$. This contradicts the starting assumption that $A=(A\cup B)-B$, so $A \cap B = \emptyset$.
Suppose $(A\cup B) \setminus B = A$. Let $x \in A\cap B$. Then $x \in A$ and $x\in B$. Then $x \in B$ so $x\not \in M\setminus B$ for any set $M$ so $x\not \in (A\cup B) \setminus B$. So $x \not \in A$. But that's a contradiction. So if $(A\cup B)\setminus B =A$ then $x \in A\cap B$ is impossible. So $A\cap B=\emptyset$. Note this in no way means $B$ is empty; just that $A$ and $B$ are disjoint. ..... Alternatively. If $(A\cup B)\setminus B = A$ then if $x \in A$ then $x\in (A\cup B) \setminus B$ so $x \not \in B$. So $x$ in both $A$ and $B$ is impossible so $A\cap B=\emptyset$. Or.... If $(A\cup B)\setminus B = A$ and if $x \in B$ then $x\not \in (A\cup B)\setminus B = A$ so $x$ in both $B$ and $A$ is impossible. .... An alternative way would be to prove that $(M\cup N)\setminus N = M\setminus N$ always. I'll leave that proof to you. That means $(A\cup B) \setminus B= A\setminus B= A$. So if $x\in B$ then $x$ is removed from $A$ to get $A\setminus B$, so $x\not \in A$. Etc.
Finding the partial limits of $\sin n$. How do I find the set of subsequential limits of the sequence $\sin n$ ($n$ a natural number)? I know that it is $[-1,1]$, but how do I prove it? Edit: Is it true that $\sin(n)$ takes a different value for each natural number $n$? How do I prove it? If that is true, is it possible to find a bijection $\mathbb N\to[-1,1]$?
Outline: Consider the points $P(n)=(\cos n,\sin n)$ as $n$ ranges over the positive integers. It is enough to show that this set of points is dense in the unit circle. Since $\pi$ is irrational, we have $P(m)\ne P(n)$ if $m\ne n$. It follows by the Pigeonhole Principle that given any $\epsilon \gt 0$, there exist $m_0$ and $n_0$, with $m_0\lt n_0$, such that $0\lt d(P(m_0),P(n_0))\lt \epsilon$. In particular, $0 \lt |\sin(m_0) -\sin(n_0)|\lt \epsilon$. Then for any point $P$ on the unit circle, there are infinitely many positive integers $k$ such that the distance of $P(m_0+k(n_0-m_0))$ from $P$ is less than $\epsilon$. In particular, for any $x$ there exist infinitely many $k$ such that $|\sin x -\sin(m_0+k(n_0-m_0))|\lt \epsilon$. Remark: The modified question asks whether $\sin m$ and $\sin n$ can be equal if $m\ne n$. Note that $\sin x=\sin y$ if and only if $y=x+2k\pi$ or $y=(2k+1)\pi -x$ for some integer $k$. Since $\pi$ is irrational, this cannot happen if $x=m$ and $y=n$, where $m$ and $n$ are distinct integers. In essence this fact was used in the proof outlined in the answer.
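The pigeonhole argument is non-constructive, but a brute-force search illustrates the density claim: for any target value in $[-1,1]$ one quickly finds integers $n$ with $\sin n$ as close as desired (a small illustration only; the names are mine).

```python
import math

def close_hits(target, eps, limit=100_000):
    """Integers n <= limit with |sin(n) - target| < eps."""
    return [n for n in range(1, limit + 1) if abs(math.sin(n) - target) < eps]

print(close_hits(0.5, 1e-3)[:5])   # a handful of n with sin(n) within 1e-3 of 0.5
```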
It's easy to prove that $\forall n,m\in\mathbb{N}, \sin n \neq \sin m$ if $n \neq m $. If $\sin{n}=\sin{m}$, then $n=m+2\pi k$ or $n=(2k+1)\pi-m$ for some $k\in\mathbb{Z}$, by the properties of the sine function. In the first case $\pi=\dfrac{n-m}{2k}\in\mathbb{Q}$ (note $k\neq 0$ since $n\neq m$); in the second, $\pi=\dfrac{n+m}{2k+1}\in\mathbb{Q}$. But since the left side is clearly an irrational number, this contradiction implies the above fact.
Tangent Lines- h(x)=f(g(x)) Assume $f$ and $g$ are differentiable on their domains with $h(x)=f(g(x))$. Suppose the equation of the line tangent to the graph of $g$ at the point $(4,70)$ is $y=3x-5$ and the equation of the line tangent to the graph of $f$ at $(7,9)$ is $y= -2x+23$ a. calculate $h(4)$ and $h'(4)$. b. Determine an equation of the line tangent to the graph of $h$ at the point of the graph where $x=4$. I'm lost on a, can anyone give me a hint on how to approach it?
We have insufficient information about $f(x)$ to solve part a. All we know about $f(x)$ is that it passes through $(7,9)$ and that it has derivative $f'(7)=-2$. This could be satisfied by $f(x)=-2x+23$, in which case $h(4)$ would be $-117$ but it could also be satisfied by $f(x)=x^2-16x+72$, in which case, $h(4)$ would be $3852$. Presumably the value of $70$ was a typo and should have been $7$, in which case $h(4)$ would have been: $4\underbrace{\rightarrow}_{g}7\underbrace{\rightarrow}_{f}\underline{9}$. For part b, it suffices to use the chain rule.
Hint : $h(x)=f(3x-5)$ Also $dh/dx=f'(g(x)) g'(x)$
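Assuming, as the first answer suggests, that the point on $g$ was meant to be $(4,7)$ rather than $(4,70)$, the computation these hints are driving at is
$$h(4)=f(g(4))=f(7)=9,\qquad h'(4)=f'(g(4))\,g'(4)=f'(7)\cdot 3=(-2)(3)=-6,$$
so the tangent line to $h$ at $x=4$ would be $y=9-6(x-4)=-6x+33$.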
Evaluating a sum $\sum_{k=1}^{n}\frac{1}{k}\binom{n}{k}$ How should I evaluate the sum: $\sum_{k=1}^{n}\frac{1}{k}\binom{n}{k}$ ? I have no relevant observations or partial results so far. Any kind of help or advice would be truly appreciated!
Use Newton's binomial theorem: $$(1+x)^n=\sum_{k=0}^n \binom nk x^k\stackrel{\text{integ.}}\implies\frac{(1+x)^{n+1}}{n+1}=\sum_{k=0}^n\binom nk\frac{x^{k+1}}{k+1}+C\;,\;\;C=\text{ constant}$$ Now substitute $\;x=0\;$ to find the constant, and then $\;x=1\;$, and do a little algebra.
In the equation $y = ax^2 + bx + c$ of a parabola, what do "$a$", "$b$", "$c$" represent? I have trouble grasping some basic things about parabolas. (This should be easily found on Google, but for some reason I couldn't find an answer that helped me). I know one simple standard equation for a parabola: $$y = ax^2 + bx + c$$ My problem is: I'm not sure what the following letters represent: $a$, $b$, and $c$. Please try to explain to me what each of these letters represent in the equation, in a simple manner so I will understand, since I have very basic knowledge in math. Thank you
It would be worth your while to learn another standard form of the equation of a parabola, and you can complete the square, given $y = ax^2 + bx + c$, to obtain this form: $$4p(y - k) = (x-h)^2$$ The vertex of the parabola is given by $(h, k)$. $$h = \frac{-b}{2a};\quad k = \frac{4ac - b^2}{4a}$$ $$4p = \frac 1a$$
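As a quick illustration of these formulas, here is a tiny sketch that converts the coefficients $a,b,c$ into the vertex $(h,k)$ and the value $4p$ (the function name is mine):

```python
def vertex_form(a, b, c):
    """Return (h, k, 4p) for y = a x^2 + b x + c, i.e. 4p (y - k) = (x - h)^2."""
    h = -b / (2 * a)
    k = (4 * a * c - b * b) / (4 * a)
    return h, k, 1 / a

print(vertex_form(2, -8, 3))   # vertex at (2, -5), and 4p = 0.5
```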
So, we have $f(x) = ax^2 + bx + c$, where $a$ is distinct from $0$. '$a$' is related to growth: if '$a$' is negative the function eventually decreases (the parabola opens downward), and if '$a$' is positive it eventually increases (the parabola opens upward). '$c$' is of course the $y$-coordinate of the point $(0, f(0))$ (let's imagine it as the place where the graph crosses the OY axis).
Euclidean norm second derivative I really need Your help. I need to prove that Euclidean norm is strictly convex. I know that a function is strictly convex if $f ''(x)&gt;0$. Can I use it for Euclidean norm and how? $||x||''=\frac{||x||^2-x^2}{||x||^3}$ Thank You!
The function $$ \lVert \mbox{ }\rVert:\mathbb{R}^n\to \mathbb{R},\, f(x)=\lVert x\rVert $$ is certainly convex. In fact, given $t\in [0,1]$ and $x,y\in \mathbb{R}^n$ we have: $$ \lVert tx+(1-t)y\rVert=\lVert tx+(1-t)y\rVert\le \lVert tx\rVert+\lVert (1-t)y\rVert=t\lVert x\rVert+(1-t)\lVert y\rVert. $$ But is it STRICTLY CONVEX? i.e. if $t\in (0,1)$ and $x,y \in \mathbb{R}^n$, with $x\ne y$, then $$ \lVert tx+(1-t)y\rVert<t\Vert x\rVert+(1-t)\lVert y\rVert. $$ The answer is NO. In fact, if $t\in (0,1)$, and $y=sx$, with $0<s \ne 1$, and $0\ne x\in \mathbb{R}^n$, we certainly have $x\ne y$. But \begin{eqnarray} \lVert tx+(1-t)y\rVert&=&\lVert tx+(1-t)sx\rVert=\lVert (t+(1-t)s)x\rVert=(t+(1-t)s)\lVert x\rVert\\ &=&t\lVert x\rVert+(1-t)\lVert sx\rVert=t\lVert x\rVert+(1-t)\lVert y\rVert \end{eqnarray} Hence the function $\lVert\mbox{ } \rVert$ is CONVEX but NOT STRICTLY CONVEX.
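A quick numerical illustration of the equality case (a small sketch, assuming NumPy is available):

    import numpy as np

    # Equality in the convexity inequality for collinear points (here y = 2x),
    # showing the Euclidean norm is not strictly convex.
    x, y, t = np.array([1.0, 0.0]), np.array([2.0, 0.0]), 0.5
    lhs = np.linalg.norm(t * x + (1 - t) * y)
    rhs = t * np.linalg.norm(x) + (1 - t) * np.linalg.norm(y)
    print(lhs, rhs)  # both 1.5 -- equality, even though x != y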
By the definition of convexity and the triangle inequality we have $$ ||ax+(1-a)y|| \leq |a| ||x||+ |1-a|||y||,\quad 0\leq a\leq1. $$
Product of two regular spaces Def: A space $X$ is regular if $\forall x \in X$, and for every closed set $C \subset X$ with $x \notin C$, there exist disjoint open sets $U$ and $V$ s.t. $x \in U$ and $C \subset V$. Problem: A subspace of a regular space is regular; $X \times Y$ is regular iff each of $X$ and $Y$ is regular. So, the first part I think I figured out. A subspace of a regular space is regular. Proof: Let $Y$ be a subspace of a regular space $X$ and let $x \in Y$ and $F$ be a closed set in $Y$ not containing $x$. Since $Cl_{X}(F) \cap Y = F$ then $x \notin Cl_{X}(F)$. Then because $X$ is regular $\exists U,V$ disjoint open sets in $X$ such that $x \in U$ and $Cl_{X}(F) \subset V$. Then $U \cap Y$ and $V \cap Y$ are open sets in $Y$ such that $x \in U \cap Y$ and $F \subset V \cap Y$. Therefore $Y$ is a regular subspace of $X$. The second half of the question is where I'm having trouble. I feel like it should follow straightforwardly from the definition of a regular space and work out componentwise on $X \times Y$, but at the same time I know how these questions go and that the question wouldn't be phrased how it is if I didn't need to use the fact that a subspace of a regular space is regular. Also, talking to a friend made me feel this to be even more true, but I can't seem to get it or see why it is really needed. Here's my attempt though. $X \times Y$ is regular iff each of $X$ and $Y$ is regular. Proof: $\Rightarrow$) Assume $X \times Y$ is regular. Then $\forall (x,y) \in X \times Y$ and for every closed $(F_{1}, F_{2}) \subset X \times Y$ with $(x,y) \notin (F_{1}, F_{2})$ there exist disjoint open sets $(U_{1}, U_{2}), (V_{1}, V_{2})$ in $X \times Y$ such that $(x,y) \in (U_{1}, U_{2})$ and $(V_{1}, V_{2}) \subset (F_{1}, F_{2})$. Then $x \in U_{1} \subset X$, $y \in U_{2} \subset Y$, and $V_{1} \subset F_{1}$ in $X$ and $V_{2} \subset F_{2}$ in $Y$. Therefore, $X$ is regular and $Y$ is regular. $\Leftarrow$) I'm not really sure here, and the proof I have now for this part is fairly analogous to the first implication. As far as using the subspace result for proving the second half of the question, the only thing I can really think to do is to consider the subspaces $X$ and $Y$ in $X \times Y$, but I wasn't able to quite get the proof when messing around with that, and my notation is probably a little off since I have not done much with products in topology, so I'll spare everyone and not post that. This question seemed like it would be fairly easy at first glance; I'm not sure why it's getting me so confused. Anyways, thank you very much for any help/insight you can provide.
Here are some suggestions; I’ve left a lot of details for you to complete. The product question is much easier if you first prove this little Proposition: A space $X$ is regular if and only if for each point $x\in X$ and each open set $U$ containing $x$ there is an open set $V$ such that $x\in V\subseteq\operatorname{cl}_X V\subseteq U$. For the proof of $(\Leftarrow)$, suppose that $\langle x,y\rangle\in X\times Y$ and $U$ is an open set in $X\times Y$ such that $\langle x,y\rangle\in U$. Then by the definition of the product topology there are open sets $V\subseteq X$ and $W\subseteq Y$ such that $\langle x,y\rangle\in V\times W\subseteq U$. Clearly $x\in V$ and $y\in W$, so you can use the regularity of $X$ and $Y$ and the proposition to get open sets $G\subseteq X$ and $H\subseteq Y$ such that $x\in G\subseteq\operatorname{cl}_X G\subseteq V$ and $y\in H\subseteq\operatorname{cl}_Y H\subseteq W$. Now consider the open set $G\times H$, which clearly contains $\langle x,y\rangle$; what is its closure in $X\times Y$? For the proof of $(\Rightarrow)$ you need to assume that $X\times Y$ is regular and then prove that $X$ and $Y$ are regular. To show directly that $X$ is regular, you must start with a point $x\in X$, not a point in $X\times Y$. If you’re going for a direct proof, and you use the proposition, you’ll also start with an open set $U$ containing $x$. Let $y$ be any point of $Y$, and consider the open neighborhood $U\times Y$ of $\langle x,y\rangle$. Get an open set $V\subset X\times Y$ such that $$\langle x,y\rangle\in V\subseteq\operatorname{cl}_{X\times Y}V\subseteq U\times Y\;,$$ and find a basic open set in the product topology (i.e., an open ‘box’, like $V\times W$ in the proof of $(\Leftarrow)$) containing $\langle x,y\rangle$ and contained in $V$. You should have little trouble using that ‘box’ to get an open set in $X$ containing $x$ whose closure is contained in $U$. However, for this direction you can, as you suspected, use the first part of the problem. Pick any $y\in Y$; then $X\times \{y\}$ is a subspace of the regular space $X\times Y$, so it’s regular, and it’s also homeomorphic to $X$, so $X$ is regular as well. You can handle $Y$ similarly.
($\Rightarrow$) Suppose $X\times Y$ is regular. Pick points $x_0\in X$ and $y_0\in Y$; then the subspaces $X\times\{y_0\}$ and $\{x_0\}\times Y$ are regular (by your first result). Now note that these subspaces are homeomorphic to $X$ and $Y$ respectively, hence $X$ and $Y$ are regular.
How do we prove that something is unprovable? I have read somewhere there are some theorems that are shown to be "unprovable". It was a while ago and I don't remember the details, and I suspect that this question might be the result of a total misunderstanding. By the way, I assume that unprovable theorem does exist. Please correct me if I am wrong and skip reading the rest. As far as I know, the mathematical statements are categorized into: undefined concepts, definitions, axioms, conjectures, lemmas and theorems. There might be some other types that I am not aware of as an amateur math learner. In this categorization, an axiom is something that cannot be built upon other things and it is too obvious to be proved (is it?). So axioms are unprovable. A theorem or lemma is actually a conjecture that has been proved. So "a theorem that cannot be proved" sounds like a paradox. I know that there are some statements that cannot be proved simply because they are wrong. I am not addressing them because they are not theorems. So what does it mean that a theorem is unprovable? Does it mean that it cannot be proved by current mathematical tools and it may be proved in the future by more advanced tools that are not discovered yet? So why don't we call it a conjecture? If it cannot be proved at all, then it is better to call it an axiom. Another question is, how can we be sure that a theorem cannot be proved? I am assuming the description might be some high level logic that is way above my understanding. So I would appreciate if you put it into simple words. Edit- Thanks to a comment by @user21820 I just read two other interesting posts, this and this that are relevant to this question. I recommend everyone to take a look at them as well.
When we say that a statement is 'unprovable', we mean that it is unprovable from the axioms of a particular theory. Here's a nice concrete example. Euclid's Elements, the prototypical example of axiomatic mathematics, begins by stating the following five axioms: Any two points can be joined by a straight line Any finite straight line segment can be extended to form an infinite straight line. For any point $P$ and choice of radius $r$ we can form a circle centred at $P$ of radius $r$ All right angles are equal to one another. [The parallel postulate:] If $L$ is a straight line and $P$ is a point not on the line $L$ then there is at most one line $L'$ that passes through $P$ and is parallel to $L$. Euclid proceeds to derive much of classical plane geometry from these five axioms. This is an important point. After these axioms have been stated, Euclid makes no further appeal to our natural intuition for the concepts of 'line', 'point' and 'angle', but only gives proofs that can be deduced from the five axioms alone. It is conceivable that you could come up with your own theory with 'points' and 'lines' that do not resemble points and lines at all. But if you could show that your 'points' and 'lines' obey the five axioms of Euclid, then you could interpret all of his theorems in your new theory. In the two thousand years following the publication of the Elements, one major question that arose was: do we need the fifth axiom? The fifth axiom - known as the parallel postulate - seems less intuitively obvious than the other four: if we could find a way of deducing the fifth axiom from the first four then it would become superfluous and we could leave it out. Mathematicians tried for millennia to find a way of deducing the parallel postulate from the first four axioms (and I'm sure there are cranks who are still trying to do so now), but were unable to. Gradually, they started to get the feeling that it might be impossible to prove the parallel postulate from the first four axioms. But how do you prove that something is unprovable? The right approach was found independently by Lobachevsky and Bolyai (and possibly Gauss) in the nineteenth century. They took the first four axioms and replaced the fifth with the following: [Hyperbolic parallel postulate:] If $L$ is a straight line and $P$ is a point not on the line $L$ then there are at least two lines that pass through $P$ and are parallel to $L$. This axiom is clearly incompatible with the original parallel postulate. The remarkable thing is that there is a geometrical theory in which the first four axioms and the modified parallel postulate are true. The theory is called hyperbolic geometry and it deals with points and lines inscribed on the surface of a hyperboloid: In the bottom right of the image above, you can see a pair of hyperbolic parallel lines. Notice that they diverge from one another. The first four axioms hold (and you can check this), but now if $L$ is a line and $P$ is a point not on $L$ then there are infinitely many lines parallel to $L$ passing through $P$. So the original parallel postulate does not hold. This now allows us to prove very quickly that it is impossible to prove the parallel postulate from the other four axioms: indeed, suppose there were such a proof. Since the first four axioms are true in hyperbolic geometry, our proof would induce a proof of the parallel postulate in the setting of hyperbolic geometry. But the parallel postulate is not true in hyperbolic geometry, so this is absurd. 
This is a major method for showing that statements are unprovable in various theories. Indeed, a theorem of Gödel (Gödel's completeness theorem) tells us that if a statement $s$ in the language of some axiomatic theory $\mathbb T$ is unprovable then there is always some structure that satisfies the axioms of $\mathbb T$ in which $s$ is false. So showing that $s$ is unprovable often amounts to finding such a structure. It is also possible to show that things are unprovable using a direct combinatorial argument on the axioms and deduction rules you are allowed in your logic. I won't go into that here. You're probably interested in things like Gödel's incompleteness theorem, that say that there are statements that are unprovable in a particular theory called ZFC set theory, which is often used as the foundation of all mathematics (note: there is in fact plenty of mathematics that cannot be expressed in ZFC, so all isn't really correct here). This situation is not at all different from the geometrical example I gave above: If a particular statement is neither provable nor disprovable from the axioms of all mathematics it means that there are two structures out there, both of which interpret the axioms of all mathematics, in one of which the statement is true and in the other of which the statement is false. Sometimes we have explicit examples: one important problem at the turn of the century was the Continuum Hypothesis. The problem was solved in two steps: Gödel gave a structure satisfying the axioms of ZFC set theory in which the Continuum Hypothesis was true. Later, Cohen gave a structure satisfying the axioms of ZFC set theory in which the Continuum Hypothesis was false. Between them, these results show that the Continuum Hypothesis is in fact neither provable nor disprovable in ZFC set theory.
The most common way (AFAIK) to establish $A \not \vdash B$ is to find some statement $X$ such that: $A,~X \vdash \lnot B$ $A \land X$ is consistent Then you can conclude $A \not \vdash B$ because constructively if $A \vdash B$ then $A ,~ X \vdash B$ then $A ,~ X \vdash B \land \lnot B$ which contradicts the assumption that $A \land X$ are consistent. For example, if you wanted to show that from "$n$ is divisible by $3$" you cannot prove "$n$ is divisible by $6$" you do the obvious thing of pointing out the $X$ of $n = 9$. Since $3|n , ~ n = 9 \vdash \lnot 6|n$ $(3|n) \land (n = 9)$ is consistent So $3|n \not \vdash 6|n$. This simple approach can get very complex, mainly because establishing $A \land X$ as being consistent can be extremely demanding, usually an informal model theoretic approach is used for this. It is also a well known way to establish undecidability, if you can find $X_1$ and $X_2$ such that: $A, ~ X_1 \vdash B$ $A, ~ X_2 \vdash \lnot B$ $A \land X_1$ is consistent $A \land X_2$ is consistent Then for the same reason $A \not \vdash B$ and $A \not \vdash \lnot B$, so $B$ is independent of $A$.
Problem about complex integration The question is to find $\int \frac{z^2-z+1}{z-1}dz$ over $|z|=1$. My solution is: Using Cauchy's integral formula we have: $$f(1) = \frac{1}{2\pi i}\int \frac{f(z)}{z-1}dz,$$ but $f(1) = 1$. Therefore: $$\int \frac{f(z)}{z-1}dz = 2\pi i,$$ where $f(z) = z^2-z+1$. Is my solution correct? If not then please help me to rectify it. Thanks a lot.
Your solution is incorrect because you are using Cauchy's integral formula incorrectly. Using Cauchy's formula we can evaluate the function at any point inside the domain bounded by the contour of integration, and in this case the point lies on the contour. The correct approach is to find the Principal Value of the integral as pointed out by @MhenniBenghorbal. Let $C$ denote the unit circle. Note that $\int_C \dfrac{z^2-z+1}{z-1}\, \mathrm{d} z = \int_C z \, \mathrm{d} z + \int_C \dfrac{\mathrm{d} z}{z-1} = \int_C \dfrac{\mathrm{d} z}{z-1}$, where the first integral is zero by Cauchy's theorem. Now we find the Principal Value of the second integral: $\mathrm{PV} \int_C \dfrac{\mathrm{d} z}{z-1} = \lim_{\epsilon \rightarrow 0} \int_{C_1} \dfrac{dz}{z-1}$. Here $C$ is approximated by the boundary of the region given by $\{|z|<1 \}\backslash \{|z-1|<\epsilon \}$, and $C_1$ is the bigger arc while $C_2$ is the arc of radius $\epsilon$. Hence $C_1: z=e^{i\theta};\ \epsilon < \theta < 2\pi-\epsilon$ in the counter-clockwise direction and $C_2: z = 1+\epsilon e^{i\phi};\ \pi/2 < \phi < 3\pi/2$ in the clockwise direction. By the residue theorem, $\int_{C_1} \dfrac{dz}{z-1} + \int_{C_2} \dfrac{dz}{z-1} =2\pi i \times \left( \text{Residue of }1/(z-1) \text{ inside the region bounded by } C_1 \cup C_2 \right) = 0$, that is, $0 = \lim_{\epsilon \rightarrow 0} \left[\int_{\epsilon}^{2\pi-\epsilon} \dfrac{ie^{i\theta}d\theta}{e^{i\theta}-1} - \int_{\pi/2}^{3\pi/2}\dfrac{i \epsilon e^{i\phi}d\phi}{\epsilon e^{i\phi}} \right]$. Therefore $\lim_{\epsilon \rightarrow 0} \int_{\epsilon}^{2\pi-\epsilon} \dfrac{ie^{i\theta}d\theta}{e^{i\theta}-1} = i\pi$; that is, $\mathrm{PV}\int_C \dfrac{dz}{z-1} = i\pi$.
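As a numerical sanity check (a rough sketch using SciPy quadrature; the symmetric cutoff $\varepsilon$ around the pole at $\theta=0$ plays the role of the principal value), the integral indeed comes out close to $i\pi$:

    import numpy as np
    from scipy.integrate import quad

    # PV of the integral over |z|=1 of dz/(z-1), parametrized by z = e^{i*theta}:
    # the integrand is i*e^{i*theta} / (e^{i*theta} - 1); exclude a symmetric
    # neighbourhood of the singularity at theta = 0 (equivalently 2*pi).
    eps = 1e-4
    f = lambda t: 1j * np.exp(1j * t) / (np.exp(1j * t) - 1)
    re, _ = quad(lambda t: f(t).real, eps, 2 * np.pi - eps, limit=200)
    im, _ = quad(lambda t: f(t).imag, eps, 2 * np.pi - eps, limit=200)
    print(re + 1j * im)  # approximately 0 + 3.14159j, i.e. i*pi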
If we just take $z=\cos x+i\sin x$ and solve the question, then instead of $dz$ we substitute $dz=ie^{ix}\,dx$ and integrate over $0\le x\le 2\pi$.
Prove or disprove that in an 8-element subset of $\{1,2…,30\}$ there must exist two $4$-element subsets that sum to the same number. How can I show that for any set of $8$ distinct positive integers not exceeding $30$, there must exist two distinct $4$-element subsets that sum up to the same number? I tried using the pigeonhole principle, but I still don't get it. There are $$\binom {8}4=70$$ four-element subsets of an $8$-element set. The least possible sum is $1+2+3+4=10$ and the greatest possible sum is $27+28+29+30=114$. Hence, there are $105$ possible sums. I have no idea how to continue because the number of possible integer sums is greater than the number of four-element subsets. The $4$-element subsets are not necessarily non-overlapping. Edit: For example, from $X=\{1,3,9,11,15,20,24,29\}$, we can choose two different subsets $\{1,3,15,24\}$ and $\{3,9,11,20\}$ because they both sum up to $43$.
Let the elements of $X$ be $a_1<a_2<...<a_8$ and denote the seven successive differences by $d_i=a_{i+1}-a_i.$ Consider the subsets of size $4$ which contain either $2$ or $3$ elements of $\{a_5,a_6,a_7,a_8\}$. There are $$\binom{4}{1}\binom{4}{3}+\binom{4}{2}\binom{4}{2}=52$$ of these subsets and the possible sums of their elements range from $a_1+a_2+a_5+a_6$ to $a_4+a_6+a_7+a_8$. So, by the pigeon-hole principle, we are finished unless $$a_4+a_6+a_7+a_8-(a_1+a_2+a_5+a_6)+1\ge 52$$ $$\text{i.e. } 2(a_8-a_1)\ge 51+d_1+d_4+d_7.$$ Since $a_8-a_1\le 29$ we must have $d_1+d_4+d_7\le7$. Using the observations given below, $d_1,d_4,d_7$ are all different and no two can add to the third and so $\{d_1,d_4,d_7\}=\{1,2,4\}$ and $\{a_1,a_{8}\}=\{1,30\}.$ Some observations about the $d_i$. (1) Any two non-adjacent differences are unequal. (2) Given three non-adjacent differences, none is the sum of the other two. (3) Given two adjacent differences, the sum of these differences can replace one of the differences in observations (1) and (2). (We still require the 'combined difference' to be non-adjacent to the other differences involved.) The proofs of these are all elementary and of the same form. As an example, suppose we have $d_2+d_3=d_5+d_7$, which is a combination of (2) and (3). Then $$a_4-a_2=a_6-a_5+a_8-a_7.$$ The sets $\{a_4,a_5,a_7\}$ and $\{a_2,a_6,a_8\}$ then have the same sum and $a_1$, say, can be added to each. To return to the main proof where we know that the differences $\{d_1,d_4,d_7\}=\{1,2,4\}$. Let $d$ be a difference adjacent to whichever of $\{d_1,d_4,d_7\}$ is $1$. Then, by the observations, $\{d,d+1\}\cap\{2,4,6\}$ is empty. So $d\ge7$. Let $d$ be a difference adjacent to whichever of $\{d_1,d_4,d_7\}$ is $2$. Then, by the observations, $\{d,d+2\}\cap\{1,3,4,5\}$ is empty. So $d\ge6$. Let $d$ be a difference adjacent to whichever of $\{d_1,d_4,d_7\}$ is $4$. Then, again by the observations, $\{d\}\cap\{1,2,3\}$ is empty. So $d\ge4$. The sum of the differences (which is $29$) is now at least $(1+2+4)+(7+6+4)+d$, where $d$ is the 'other' difference adjacent to $d_4$. Therefore $d_4=4$ and the two differences adjacent to it (which cannot be equal) are $4$ and $5$. The differences adjacent to the differences of $1$ and $2$ are thus forced to be $7$ and $6$, respectively. Then $a_1+a_8=a_3+a_5$ and we are finished.
The statement is false. Take for example a subset with 7 odd numbers and 1 even number. Then we divide this subset into two 4-element subsets. One of them will have 4 odd numbers whose sum will be an even number while the other will have 1 even number and 3 odd numbers which will add up to an odd number. Example: 1,3,5,7,9,11,13,14 Total sum is 63 and if we were to form 2 subsets, sum in each would not be a whole number.
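As a quick computational check of the example set given in the question (a small Python sketch, not part of the proof), one can list collisions among the $70$ four-element subset sums of $X=\{1,3,9,11,15,20,24,29\}$:

    from itertools import combinations

    # Group the 4-element subsets of X by their sum and report repeated sums,
    # e.g. {1,3,15,24} and {3,9,11,20} both sum to 43.
    X = [1, 3, 9, 11, 15, 20, 24, 29]
    by_sum = {}
    for quad in combinations(X, 4):
        by_sum.setdefault(sum(quad), []).append(quad)
    print([(s, qs) for s, qs in by_sum.items() if len(qs) > 1][:3])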
What is the smallest $n$ such that the condition holds? What is the smallest $n$ such that for all $a,b$ with $12 \mid a+b$, it is the case that if $n\mid ab$, then $12 \mid a$ and $12 \mid b$? I think I see how it works if $n = 144$, but I don't know how to prove that it is the smallest (or that there is another value of $n$ which is).
Let's see. Apparently, $n$ must be divisible by 3, otherwise it couldn't enforce divisibility by 12: if $3\nmid n$, take $a=4n$ and $b=8n$, so that $12\mid a+b=12n$ and $n\mid ab$, yet $12\nmid 4n$. Now suppose $3\mid n$ and $n\mid ab$. The product $ab$ is divisible by 3, which means that at least one of the numbers $a$ and $b$ is divisible by 3, and so is their sum, hence so is the other number. For the factor of 4, note that $a$ and $b$ have the same parity (their sum is even), so if $8\mid n$ then $8\mid ab$ forces both to be even; write $a=2x,\;b=2y$, where $x+y$ is even because $4\mid a+b$. Then $8\mid ab=4xy$ gives that $xy$ is even, so one of $x,y$ is even, hence both are (same parity), i.e. $4\mid a$ and $4\mid b$. Combined with divisibility by 3, this gives $12\mid a$ and $12\mid b$, so $n=24$ works. Moreover $8\mid n$ is necessary: if the highest power of $2$ dividing $n$ is at most $4$, let $m$ be the odd part of $n$ and take $a=b=6m$; then $12\mid a+b=12m$ and $n\mid ab=36m^2$, yet $12\nmid 6m$. So the smallest such $n$ is $\operatorname{lcm}(8,3)=24$ (in particular $n=72$ and $n=144$ work, but they are not the smallest).
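A small brute-force sanity check agrees with this (a Python sketch; it only scans a finite range of $a,b$, so it can rule candidate values of $n$ out but cannot by itself prove the implication for all $a,b$):

    def holds(n, bound=200):
        """True if no counterexample (a, b) with a, b <= bound is found, i.e.
        every pair with 12 | a+b and n | ab also has 12 | a and 12 | b."""
        for a in range(1, bound + 1):
            for b in range(1, bound + 1):
                if (a + b) % 12 == 0 and (a * b) % n == 0:
                    if a % 12 != 0 or b % 12 != 0:
                        return False  # counterexample found for this n
        return True

    print(next(n for n in range(1, 150) if holds(n)))  # prints 24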
If I get you right, we have to find $n,a,b$ so that the conditions $12 \mid a+b$, $n \mid ab$, $12 \mid a$, $12 \mid b$ hold true. Well, if you are searching for the smallest possible values, then you might be able to simply check the first numbers with a simple, however inefficient, computer program. This is a Python example:

    import itertools

    # brute-force search over small values of n, a and b
    for n, a, b in itertools.product(range(1, 100), repeat=3):
        if (a + b) % 12 == 0 and (a * b) % n == 0:
            if a % 12 == 0 and b % 12 == 0:
                print(n, a, b)

The results are:

    1 12 12
    1 12 24
    1 12 36
    1 12 48
    1 12 60
    1 12 72
    ...
How to prove that a connected graph with $|V| -1= |E|$ is a tree? I could neither show myself nor find a proof of the following result: if $G=(V,E)$ is a connected graph with $|E|=|V|-1$ then $G$ is a tree. Could somebody please provide an argument to establish this.
An empty graph on $n$ vertices has $n$ connected components. If you have a graph and add an edge, then the number of connected components is reduced by at most one (since this edge touches at most two connected components). Therefore a connected graph on $n$ vertices has at least $n-1$ edges. Now suppose a connected graph on $n$ vertices has exactly $n-1$ edges; we must prove no cycle exists. Suppose one does: if we remove an edge from the cycle we get a connected graph (because any path that uses this edge can instead go around the remaining part of the cycle). This is a contradiction, since the resulting graph would have $n$ vertices and $n-2$ edges, and hence could not be connected.
Suppose the graph contains a cycle; then we can remove an edge without removing a vertex and the graph is still connected. After removing it, the graph has $n$ vertices and $n - 2$ edges. Continue to do so until we reach a tree (a connected graph with no cycle) with $n$ vertices and $n - k$ edges $(k > 1)$. However, this contradicts the fact that a tree with $n$ vertices must have $n - 1$ edges. Therefore the graph must not contain cycles, and then it is a tree by definition.
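For a quick empirical illustration (a sketch that assumes the networkx library is available; it is of course not a proof), one can sample small graphs with exactly $|V|-1$ edges and check that the connected ones are trees:

    import random
    import networkx as nx

    # Sample random graphs with n vertices and n-1 edges; whenever such a
    # graph happens to be connected, it should also be acyclic, i.e. a tree.
    for _ in range(1000):
        n = random.randint(2, 7)
        G = nx.gnm_random_graph(n, n - 1)
        if nx.is_connected(G):
            assert nx.is_tree(G)
    print("no counterexample found")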
How many positive 3-digit integers can be formed using the digits 0, 1, 2, 3, 4, and 5? My Question says, Using the digits (0, 1, 2, 3, 4, 5), how many positive 3-digit integers can be formed? So my work so far is for positive 3 digit integers you would have the choice of 6 digits for the first digit, the choice of 6 digits for the second digit, and the choice of 6 digits for the third digit: (6)(6)(6) But I'm not sure where to go from here, do I simply multiply them together? Is there another step afterwards? Am I missing something? Please help!
The first digit cannot be $0$. Therefore the answer is $5\times6^2$.
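A quick enumeration check (a Python sketch, assuming digits may repeat, which is what the count $5\times 6^2=180$ presumes):

    from itertools import product

    # Count length-3 digit strings over {0,...,5} whose leading digit is nonzero.
    count = sum(1 for d in product(range(6), repeat=3) if d[0] != 0)
    print(count)  # 180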