Columns: title_body (string, 61 to 4.12k chars), upvoted_answer (string, 20 to 29.9k chars), downvoted_answer (string, 19 to 18.2k chars).
Are these simple field extensions normal? Studying for an upcoming exam on basic algebra, I came across the following problem: Test for $\alpha = \sqrt{2+\sqrt{2}}$ and $\alpha = \sqrt{1+\sqrt{3}}$ whether the field extension $\mathbb{Q}(\alpha)/\mathbb{Q} $ is normal. I feel that I should be able to solve it, but I am running up against a wall here. Can you explain to me how one would generally approach this kind of problem? Best regards
Take the first one, $\alpha =\sqrt {2+\sqrt 2}$. Let's find its minimal polynomial. We remark that $$\alpha^2=2+\sqrt 2\implies \left(\alpha^2-2\right)^2=2$$ so we have $$\alpha^4-4\alpha^2+2=0.$$ That is effectively quadratic and we can solve to get the roots $$\pm \sqrt {2\pm \sqrt 2}.$$ Letting $\beta=\sqrt {2-\sqrt 2}$ we easily see that $$\alpha \beta = \sqrt 2\implies \beta =\frac {\alpha^2-2}{\alpha}\implies \beta\in \mathbb Q(\alpha).$$ Since $-\alpha,-\beta\in\mathbb Q(\alpha)$ as well, $\mathbb Q(\alpha)$ contains every root of the minimal polynomial, so the extension is normal. The other instance should follow along similar lines.
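The computation above can be double-checked with a computer algebra system; here is a minimal sketch using SymPy (the variable names are mine):

```python
from sympy import sqrt, symbols, minimal_polynomial

x = symbols('x')
alpha = sqrt(2 + sqrt(2))
beta = sqrt(2 - sqrt(2))

# Minimal polynomial of alpha over Q: x^4 - 4x^2 + 2, as derived above
print(minimal_polynomial(alpha, x))

# The conjugate root beta equals (alpha^2 - 2)/alpha, so beta lies in Q(alpha)
print(((alpha**2 - 2) / alpha).equals(beta))  # True
```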
Since $\mathbb{Q}$ has characteristic zero, you only have to check that $\alpha$'s minimal polynomial has no multiple roots.
How to prove that all odd powers of two plus one are multiples of three For example \begin{align} 2^5 + 1 &= 33\\ 2^{11} + 1 &= 2049\ \text{(dividing by $3$ gives $683$)} \end{align} I know that $2^{61}- 1$ is a prime number, but how do I prove that $2^{61}+1$ is a multiple of three?
Since $2 \equiv -1 \pmod{3}$, we have $2^{k} \equiv (-1)^k \pmod{3}$. When $k$ is odd this becomes $2^k \equiv -1 \pmod{3}$. Thus $2^k+1 \equiv 0 \pmod{3}$.
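A brute-force sanity check of this congruence argument (a minimal sketch):

```python
# 2^k + 1 is divisible by 3 for every odd k
assert all((2**k + 1) % 3 == 0 for k in range(1, 1000, 2))

# The specific case from the question
print((2**61 + 1) % 3)   # 0
print((2**61 + 1) // 3)  # 768614336404564651
```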
Among any three consecutive integers, exactly one is divisible by three. Since $2^{61}-1$ is prime (hence not a multiple of $3$) and $2^{61}$ is trivially not a multiple of $3$, $2^{61}+1$ must be the multiple of $3$.
Two questions in spectral theory: the spectrum of the Fourier transform and the Hamiltonian of the hydrogen atom. I have the following two questions: The Fourier transform defines a unitary (provided that it is normalized properly) map $\hat{\cdot}:L^2(\mathbf{R})\rightarrow L^2(\mathbf{R})$. I figured out its point spectrum, which is very easy; is it possible to determine the whole spectrum of this operator? I know $\sigma_p(\hat{\cdot})=\mu_4(\mathbf{C})$ ($4$-th roots of unity) already; is it the case that $\sigma(\hat{\cdot})=\mu_4(\mathbf{C})$ also, maybe because $\sigma_p$ might be dense in $\sigma$? Is there a concise way (not taking more than two pages, say) to see that the closure $H$ of the Hamiltonian operator of the hydrogen atom (defined on $C^\infty_0$), viz. $$-\frac{1}{2}\Delta-\frac{1}{\|x\|},$$ has domain $H^2(\mathbf{R}^3)$, is self-adjoint, and to determine the spectrum? Remarks on 2: I found a reasonably short proof of self-adjointness in Reed/Simon's Methods of modern mathematical physics, vol. II, Thm. X.15. I look forward to your answers.
The spectrum is not as stated. The spectrum of the Hamiltonian for the non-relativistic Hydrogen atom has eigenvalues corresponding to the bound states of Hydrogen, and these are negative. The positive spectrum corresponds to unbound states, and is a continuous spectrum. In spherical coordinates, for a function which depends on the radius $r$ only, one has $$ \begin{align} (-\Delta -\frac{1}{\|x\|})f & = -\Delta f -\frac{1}{r}f \\ & = -\frac{1}{r^{2}}\frac{d}{dr}r^{2}\frac{df}{dr}-\frac{1}{r}f. \end{align} $$ There is a purely radial eigenfunction of this operator which corresponds to the ground state of the Hydrogen atom. This eigenfunction has the form $f(r)=e^{-r/2}$ in this case where the physical constants are missing: $$ \begin{align} (-\Delta-\frac{1}{r})e^{-r/2} & = -\frac{1}{r^{2}}\frac{d}{dr}(r^{2}(-\frac{1}{2})e^{-r/2})-\frac{1}{r}e^{-r/2} \\ & = -\frac{1}{4}e^{-r/2}+\frac{1}{r}e^{-r/2}-\frac{1}{r}e^{-r/2}=-\frac{1}{4}e^{-r/2}. \end{align} $$ Therefore $-1/4 \in \sigma_{P}(-\Delta-\frac{1}{|x|})$. There are lots of other negative eigenvalues and eigenfunctions for this operator. I already answered the first part of your question in the comment section. The spectrum of the Fourier transform is $\{ 1, i, -1, -i \}$. Final Note: I see that you changed the question again. This was a complete and valid answer to the original question, and the answer is not a trivial one. Before that, the last paragraph above was a complete answer to the question. HAMMER, if you don't like the answer, it's best to ask a separate question, rather than invalidate a perfectly good answer that someone has put thought and effort into. You obviously learned something from my answers, and it's not polite to invalidate that in such a way that it makes my answer look completely irrelevant to the topic.
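The radial eigenvalue computation above is easy to verify symbolically; a sketch with SymPy (the names are mine):

```python
from sympy import symbols, exp, diff, simplify

r = symbols('r', positive=True)
f = exp(-r/2)

# Radial form of (-Delta - 1/r) applied to a function of r only
Hf = -diff(r**2 * diff(f, r), r) / r**2 - f / r
print(simplify(Hf / f))  # -1/4, confirming H f = -(1/4) f
```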
The answer to the first question is simply the following: take $f(z)=z^4$ in the spectral mapping theorem and use $\mathfrak{F}^4=1$.
Significance of negative eccentricity For any conic section, the eccentricity is the ratio of the distance to the focus and directrix. If the eccentricity were defined to be negative, would this have any significant meaning/application, or potentially be linked to complex numbers?
It actually doesn't change anything; if you look at the equations for finding the eccentricity, it is the square root of something, and since every positive number has two square roots, the eccentricity could be taken negative or positive. If you make a plot of a conic section based on the eccentricity, it doesn't change at all. Plot it on the Desmos graphing calculator using the polar formula $r=\frac{d}{1-d\cos(\theta)}$, where $d$ is the eccentricity. Nothing happens if you negate the eccentricity. [image: formula for eccentricity]
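The claim can be made precise: converting $r=\frac{d}{1-d\cos(\theta)}$ to Cartesian coordinates gives $x^2+y^2=d^2(1+x)^2$, and replacing $d$ by $-d$ yields the same implicit equation. A numerical sketch (the sample value of $d$ is arbitrary):

```python
import numpy as np

d = 0.75
theta = np.linspace(0, 2*np.pi, 400, endpoint=False)

for e in (d, -d):
    r = e / (1 - e*np.cos(theta))          # polar conic, negative r allowed
    x, y = r*np.cos(theta), r*np.sin(theta)
    residual = x**2 + y**2 - d**2*(1 + x)**2
    print(np.abs(residual).max())          # ~0: same implicit conic either way
```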
We've been studying this exact question for all of 5 minutes. We have an answer. No. This is why: the formula for eccentricity is $c/a$, which is the distance from the center to a focus divided by the distance from the center to a vertex. If eccentricity were negative, either the numerator or denominator would have to be negative. It is IMPOSSIBLE to have a negative distance for $a$ or $c$. Therefore, eccentricity cannot be negative.
Proving $a^2+b^2=c^2$ where $a,b,c$ are side lengths of a right triangle. Proving $a^2+b^2=c^2$ where $a,b,c$ are side lengths of a right triangle. First, I have never done a proof before, sorry I am so poor here. I have spent many hours but my actions have mostly used wrong ideas. I will only say what I have done that I think is right. If you need my other writing I can write it out, but it was mostly about fitting shapes inside the triangles, and I think the big mistake is that if the equality $a^2+b^2=c^2$ were wrong, then anything about the inside triangles cannot prove the statement to be wrong, because the algebra is connected to the geometry of the triangle and the shapes are inside the geometry of the triangle. Start: I started by drawing a square with sides $d,e,f,g$ all the same as $a$. The biggest possible right triangle has two sides the same length. This square can fit at least two right triangles inside it, both no smaller than the other, with one side equal to $a$. That is true because if you made one smaller than the other and pushed it into one corner of the square, then the bigger triangle touching it would overlap the corners of the square, which is wrong. So then the sum $d+e+f+g=4a$ is more than $a+b+c$ where $a,b,c$ are the side lengths of one of the triangles, and $4a>a+b+c$. Then I square: $$\begin{eqnarray*} 16a^2&>& (a+b+c)^2\\ &=&a^2+b^2+c^2+2(ab+bc+ac)\\ \end{eqnarray*}$$ The only thing I can do here is use the equation at the top, so if $a^2+b^2=c^2$ isn't true for the triangle then it's $a^2+b^2=c^2+z$ for some $z$. But I subtracted that from the expression and it wasn't wrong, which I thought it would have to be since I added the $z$. I guess the extra $z$ can't show the equation is wrong, since any number could be equal to it, including those like zero which it is not really equal to, at least if the equation were wrong. But if there were a rule that $z$ isn't zero included somehow, there should be a way to conclude that having $z$ not be zero is wrong. But then I checked and found $a,b,c,z$ with $z$ not zero so that the inequality holds, so I'm very confused. Maybe the squares are too simple and I should have chosen a rectangle or put squares around the outside of the triangle. I know a book that proves this, but I want just some clues whether my actions are appropriate to solving this, and otherwise a clue for a new way, since I don't care about it for a class; I just like to think about the question. Edit. I have corrected $(a+b+c)^2$, I think. I am also grateful for the answer below, which gives a good clue and just as much as I wished for. It is certainly an interesting choice of shapes, and different from what I was doing in that there are no gaps between the shapes. It helps clarify that, as I am asking about ways that work, I am also asking if an approach using gaps between shapes and estimations rather than exactness could work as an approach (perhaps just because I have spent some hours and wish to recover some value). Is that known? Or perhaps the notion of grouping proofs into approaches is just because the mind is simple, and we can't talk about the correctness of approaches, only whether one proof we are handed is right or wrong.
This is my favorite proof of the Pythagorean theorem. The algebra for this proof is really easy and straightforward, but it is harder to see the actual proportions that are stated in the Pythagorean theorem; if you want a proof that is clear from a visual standpoint then I would look at the lattice proof. From a geometry standpoint it is important that you note WHY you are able to arrive at your conclusions in your proofs. In the first proof I mention, our proof depends on being able to show that the rhombus with sides $c$ is in fact a square. We are able to do this because we are assuming that the sum of the angles in a triangle is 180 degrees, and that is a consequence of Euclid's 5th axiom. This is why the Pythagorean theorem fails for non-Euclidean geometries.
This is a special case of the Law of Cosines, namely one where the cosine is zero because the angle between two of the sides is $90$ degrees. Start from the Law of Cosines, and when you simplify the specific case of $\theta = 90$ degrees, $a^2+b^2=c^2$ will line up.
Limit as $x \to \infty$ of $\frac{x^5+x^3+4}{x^4-x^3+1}$ Suppose we have to find the following limit $\lim_{x\to\infty}\frac{x^5+x^3+4}{x^4-x^3+1}$ Now, if we work with L'Hôpital's rule with successive differentiations we get $L=+\infty$ But if we work like this instead: $$L=\lim_{x\to\infty}\frac{x^5(1+\frac{1}{x^2}+\frac{4}{x^5})}{x^5(\frac{1}{x}-\frac{1}{x^2}+\frac{1}{x^5})}$$ then $L$ does not exist. What is correct and what is false here? I'm a little confused.
First, you should avoid division by $0$ when calculating a limit, as it tells you nothing about the limit: $\frac{1}{0}$ is not infinity, it is undefined. $$\lim_{x\to 0}{\frac{1}{x}}$$ is a different scenario; there you get that the limit is infinity. In your example you have to apply L'Hôpital's rule $4$ times to get $$\lim_{x\to \infty}{\frac{120x}{24}}=\infty$$
First of all, $\frac 1 0 = \infty$, meaning if $f(x)\to 1$ and $g(x)\to 0$ (and $g(x)$ is positive) then $\frac {f(x)} {g(x)}\to +\infty$. It's not an indeterminate form. However, even if it were an indeterminate form, that's not a problem. Obtaining an indeterminate form doesn't mean the limit doesn't exist, it just means "the method you used doesn't work, try something else". For example, the constant function $1$ definitely has a limit as $x\to\infty$, but if I do something crazy like rewrite it as $\frac x x$, then I get the indeterminate form $\frac \infty \infty$. Every limit can be made to look like an indeterminate form if you try to calculate it in the right way.
Power series representation of $f(x) = 3x^2 - (x^2 + 1)\ln(1 - x^2) - 2x \ln \left( \frac{1+x}{1-x} \right)$ We consider the power series: $$ f(x) = \sum_{n=1}^{+ \infty} \frac{x^{2n+2}}{n(n+1)(2n+1)} $$ Prove that: $$ f(x) = 3x^2 - (x^2 + 1)\ln(1 - x^2) - 2x \ln \left( \frac{1+x}{1-x} \right) $$ Starting from the second expression I get: $$ f(x) = 3x^2 - (x - 1)^2 \ln(1 - x) - (x + 1)^2 \ln(1 + x) $$ Using the power series of $\ln(1 + x)$ and $ \ln(1 - x)$ does not lead to the wanted result. I do not know how to proceed to get the first expression.
Partial fraction decomposition of the coefficient yields $$\frac1{n(n+1)(2n+1)}=\frac1n+\frac1{n+1}-\frac4{2n+1}$$ Therefore we may write the original series as $$\sum_{n=1}^\infty\left[\frac1{n(n+1)(2n+1)}\right]x^{2n+2}=\sum_{n=1}^\infty\left[\frac1n+\frac1{n+1}-\frac4{2n+1}\right]x^{2n+2}$$ The easiest way from here on is to recognize well-known series representations. To be precise we can further conclude that \begin{align*} &(1)&&\sum_{n=1}^\infty\frac{x^{2n+2}}{n}=x^2\sum_{n=1}^\infty\frac{x^{2n}}{n}=x^2(-\log(1-x^2))\\ &(2)&&\sum_{n=1}^\infty\frac{x^{2n+2}}{n+1}=\sum_{n=2}^\infty\frac{x^{2n}}{n}=-\log(1-x^2)+x^2\\ &(3)&&-4\sum_{n=1}^\infty\frac{x^{2n+2}}{2n+1}=-2x\left[2\sum_{n=1}^\infty\frac{x^{2n+1}}{2n+1}\right]=-2x\left[\log\left(\frac{1+x}{1-x}\right)-x\right] \end{align*} Now combining these three series we can easily see that $$\therefore~f(x)=\sum_{n=1}^\infty\left[\frac1{n(n+1)(2n+1)}\right]x^{2n+2}=3x^2-(x^2+1)\log(1-x^2)-2x\log\left(\frac{1+x}{1-x}\right)$$ Of course, as HAMIDINE SOUMARE pointed out within his answer, we have to check beforehand whether these series converge or not and therefore whether we are even allowed to split up the sum like this. However, I will leave this to you. EDIT It is important to notice that the given logarithm series starts at $n=0$ and not at $n=1$. Thus, we have to consider the first summand separately. Moreover I realised that I made two mutually cancelling mistakes which overall did not affect the solution. To be precise, my given equations $(2)$ and $(3)$ both contain an error which I will correct now. For $(2)$ it should rather be $$(2)~~~\sum_{n=1}^\infty\frac{x^{2n+2}}{n+1}=\sum_{n=2}^\infty\frac{x^{2n}}{n}=-\log(1-x^2)\color{red}{-x^2}$$ Hence the series expansion of $\log(1-x^2)$ is given by $$\log(1-x^2)=-\sum_{\color{blue}{n=1}}^\infty \frac{x^{2n}}n=-x^2-\sum_{n=2}^\infty \frac{x^{2n}}n\implies \sum_{n=2}^\infty \frac{x^{2n}}n=-\log(1-x^2)-x^2$$ From here on I could clear up my doubts concerning the last sum. The sum should correctly be given by $$(3)~~~-4\sum_{n=1}^\infty\frac{x^{2n+2}}{2n+1}=-2x\left[2\sum_{n=1}^\infty\frac{x^{2n+1}}{2n+1}\right]=-2x\left[\log\left(\frac{1+x}{1-x}\right)\color{red}{-2x}\right]$$ This can be justified rather easily by exploiting the series expansion of $\log\left(\frac{1+x}{1-x}\right)$, which is the same as $2\operatorname{artanh}(x)$, given by $$\log\left(\frac{1+x}{1-x}\right)=2\sum_{\color{blue}{n=0}}^\infty\frac{x^{2n+1}}{2n+1}=2x+2\sum_{n=1}^\infty\frac{x^{2n+1}}{2n+1}\implies 2\sum_{n=1}^\infty\frac{x^{2n+1}}{2n+1}=\log\left(\frac{1+x}{1-x}\right)-2x$$ Now note that by adding up these three sums we obtain a closed-form expression of the original sum, to be precise \begin{align*} f(x)&=\underbrace{[x^2(-\log(1-x^2))]}_{(1)}+\underbrace{[-\log(1-x^2)-x^2]}_{(2)}+\underbrace{\left[-2x\left[\log\left(\frac{1+x}{1-x}\right)-2x\right]\right]}_{(3)}\\ &=[-x^2+4x^2]+[-x^2\log(1-x^2)-\log(1-x^2)]+\left[-2x\log\left(\frac{1+x}{1-x}\right)\right]\\ &=3x^2-(x^2+1)\log(1-x^2)-2x\log\left(\frac{1+x}{1-x}\right) \end{align*} This is, in fact, the desired solution, this time correctly derived.
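A numerical spot check of the final identity at a point inside the interval of convergence (a minimal sketch; the truncation level is arbitrary):

```python
from math import log

def series(x, terms=2000):
    return sum(x**(2*n + 2) / (n*(n + 1)*(2*n + 1)) for n in range(1, terms))

def closed_form(x):
    return 3*x**2 - (x**2 + 1)*log(1 - x**2) - 2*x*log((1 + x)/(1 - x))

x = 0.5
print(series(x), closed_form(x))  # both ~ 0.010990
```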
Hint: $\frac{1}{n(n+1)(2n+1)}=\frac{a}{n}+\frac{b}{n+1}+\frac{c}{2n+1}$. Find $a$, $b$ and $c$. Rewrite your series as a sum of such series, justify convergence of those series and compute to get the needed function. Recall: $\sum_{n=1}^{\infty}{\frac{t^n}{n}}=-\log(1-t), \forall t, |t|<1$
Prove the following set of vectors is a subspace Let $\,S\,$ be the set of vectors in $\,\mathbb R^3$ that are of the form $\;\begin{bmatrix} a \\ b \\ a-2b \\ \end{bmatrix} \;$ where $a$ and $b$ are real numbers. Prove $\,S\,$ is a subspace of $\,\mathbb R^3$. So I know in order to show that a set of vectors is a subspace the $0$ vector must be an element of the set, vector addition must be closed, and scalar multiplication must be closed. I started off showing the $0$ vector is an element of the set by having: $\;0\cdot a+0\cdot b+0\cdot \left(a-2b\right)=0\,$ which it does so that part is proven. Now how do I show the closure? For addition I wrote: \begin{align} a &= c_1a+c_2b+c_3\left(a-2b\right) \\ b &= d_1a+d_2b+d_3\left(a-2b\right) \end{align} and then combined $\,a+b\,$ to be $\,\left(c_1+d_1\right)a + \left(c_2+d_2\right)b + \left(c_3+d_3\right)\left(a-2b\right)\,$ and since these are linear combinations of one another it is closed under addition? Is this correct? And how do I show closed under scalar multiplication?
Let $\vec{v}_1, \vec{v}_2$ be two arbitrary vectors of this form: $$ \vec{v}_1=(a_1,b_1,a_1-2b_1)^T,\\ \vec{v}_2=(a_2,b_2,a_2-2b_2)^T. $$ What we need to show is that an arbitrary linear combination $x\vec{v}_1+y\vec{v}_2$ is again of such form: $$ x\vec{v}_1+y\vec{v}_2=(xa_1+ya_2,xb_1+yb_2,x(a_1-2b_1)+y(a_2-2b_2))^T=(xa_1+ya_2,xb_1+yb_2,xa_1+ya_2-2(xb_1+yb_2))^T, $$ which is of the required form with $a=xa_1+ya_2$ and $b=xb_1+yb_2$.
You seem to be a little confused. You need to show that $$S=\{\begin{pmatrix} a\\ b\\a-2b \end{pmatrix}:a,b\in \mathbb R\}$$ is a subspace. The zero vector $\begin{pmatrix} 0\\ 0\\0 \end{pmatrix}$ is in $S$ since you may let $a=b=0$ to obtain it. Closure law holds as $$\begin{pmatrix} a_1\\ b_1\\a_1-2b_1 \end{pmatrix}+\begin{pmatrix} a_2\\ b_2\\a_2-2b_2 \end{pmatrix}=\begin{pmatrix} a_1+a_2\\ b_1+b_2\\a_1+a_2-2(b_1+b_2) \end{pmatrix}\in S$$ by letting $a=a_1+a_2$ and $b=b_1+b_2$. Similarly for scalar multiplication you may select $a$ to be $\alpha a$ and $b$ to be $\alpha b$.
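For intuition, the closure laws can also be watched numerically; a minimal sketch (the function names are mine):

```python
import numpy as np

def make(a, b):
    """A vector of the given form (a, b, a - 2b)."""
    return np.array([a, b, a - 2*b])

def in_S(v):
    """Membership test: third coordinate equals first minus twice the second."""
    return np.isclose(v[2], v[0] - 2*v[1])

u, w = make(1.0, 3.0), make(-2.0, 0.5)
print(in_S(u + w), in_S(4.2 * u), in_S(np.zeros(3)))  # True True True
```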
Deduction of $\forall y \exists x (x=y)$ Give a deduction of $\forall y \exists x (x=y)$ My thinking is that I can prove $\exists x(x=y)$ and then use the generalization theorem; this is equivalent to proving $\neg \forall x \neg (x=y)$, but I got stuck proving this... thanks in advance. Sorry that I did not specify which deduction system I want to use here: it is a deduction sequence where each $a_i$ is either an axiom, an original hypothesis, or obtained by modus ponens (MP).
Hint: $$\matrix{ y=y \\ \hline \exists x.x=y \\ \hline \forall y.\exists x.x=y } $$
For any arbitrary value $v$, because we always witness $v=v$ by the Law of Identity, therefore ... Since this is so for any arbitrary value, we then deduce ... $\blacksquare$.
Prove that if $a, b, c$ are positive odd integers, then $b^2 - 4ac$ cannot be a perfect square. Prove that if $a, b, c$ are positive odd integers, then $b^2 - 4ac$ cannot be a perfect square. What I have done: this has to be done either with contradiction or contraposition; I was thinking contradiction is more likely.
If $b^2-4ac$ were a perfect square then the polynomial $ax^2+bx+c$ would have some rationals $\frac {p_1}{q_1}, \frac {p_2}{q_2}$ as roots ($\frac{p_i}{q_i}=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$). Therefore $(q_1x-p_1)(q_2x-p_2)=ax^2+bx+c$. So $q_1,q_2,p_1,p_2$ are odd integers (since $q_1q_2=a$ and $p_1p_2=c$), and then $q_1p_2+q_2p_1=-b$ is a sum of two odd numbers, hence even, while $b$ is odd $\Rightarrow\Leftarrow.$
If we let $a$, $b$, and $c$ be $2x+1$, $2y+1$, and $2z+1$ respectively, we get a general form that is a multiple of $8$ minus $3$: one portion of the form has an $8$ that can be factored out, and the other portion is $4$ multiplied by a product of consecutive integers, that is, $4$ multiplied by an even number, again giving a multiple of $8$. So our expression is $-3 \pmod 8$, i.e. $5 \pmod 8$, but note that any number squared is $0, 1, \text{ or } 4 \pmod 8$. This is because any number is $0, 1, 2, 3, \dots, \text{ or } 7 \pmod 8$, and squaring all those values $\pmod 8$ verifies the statement. And so we are done.
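Both arguments are easy to spot-check by brute force (a minimal sketch):

```python
from math import isqrt

residues = set()
for a in range(1, 40, 2):
    for b in range(1, 40, 2):
        for c in range(1, 40, 2):
            d = b*b - 4*a*c
            residues.add(d % 8)
            # d is never a perfect square (negative d can't be one either)
            assert d < 0 or isqrt(d)**2 != d
print(residues)  # {5}
```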
Why can ALL quadratic equations be solved by the quadratic formula? In algebra, all quadratic problems can be solved by using the quadratic formula. I read a couple of books, and they told me only HOW and WHEN to use this formula, but they don't tell me WHY I can use it. I have tried to figure it out by proving these two equations are equal, but I can't. Why can I use $x = \dfrac{-b\pm \sqrt{b^{2} - 4 ac}}{2a}$ to solve all quadratic equations?
I would like to prove the Quadratic Formula in a cleaner way. Perhaps if teachers see this approach they will be less reluctant to prove the Quadratic Formula. Added: I have recently learned from the book Sources in the Development of Mathematics: Series and Products from the Fifteenth to the Twenty-first Century (Ranjan Roy) that the method described below was used by the ninth century mathematician Sridhara. (I highly recommend Roy's book, which is much broader in its coverage than the title would suggest.) We want to solve the equation $$ax^2+bx+c=0,$$ where $a \ne 0$. The usual argument starts by dividing by $a$. That is a strategic error: division is ugly and produces formulas that are unpleasant to typeset. Instead, multiply both sides by $4a$. We obtain the equivalent equation $$4a^2x^2 +4abx+4ac=0.\tag{1}$$ Note that $4a^2x^2+4abx$ is almost the square of $2ax+b$. More precisely, $$4a^2x^2+4abx=(2ax+b)^2-b^2.$$ So our equation can be rewritten as $$(2ax+b)^2 -b^2+4ac=0 \tag{2}$$ or equivalently $$(2ax+b)^2=b^2-4ac. \tag{3}$$ Now it's all over. We find that $$2ax+b=\pm\sqrt{b^2-4ac} \tag{4}$$ and therefore $$x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}. \tag{5}$$ No fractions until the very end! Added: I have tried to show that initial division by $a$, when followed by a completing-the-square procedure, is not the simplest strategy. One might remark additionally that if we first divide by $a$, we end up needing a couple of additional "algebra" steps to partly undo the division in order to give the solutions their traditional form. Division by $a$ is definitely a right beginning if it is followed by an argument that develops the connection between the coefficients and the sum and product of the roots. Ideally, each type of proof should be presented, since each connects to an important family of ideas. And a twice-proved theorem is twice as true.
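For what it's worth, the end result of the derivation can be confirmed symbolically; a sketch using SymPy:

```python
from sympy import symbols, sqrt, simplify

a, b, c = symbols('a b c')
for sign in (1, -1):
    x = (-b + sign*sqrt(b**2 - 4*a*c)) / (2*a)
    print(simplify(a*x**2 + b*x + c))  # 0 for both signs
```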
If one is interested only in the roots one can concentrate on the monic equations. Now any monic quadratic expression becomes the square of a linear expression by addition of a suitable constant (finding the suitable constant is called completing the square). The additive constant is uniformly the same expression in the coefficients of the given quadratic equation. So the given equation has the same solutions as a modified equation of the form $(x+B)^2=A$. So the formula for the roots is of the form $x=-B \pm \sqrt A$. That's the reason all quadratic equations have solutions expressed in a uniform manner.
Not complete but minimal sufficient statistic Let $X =(X_1,\ldots, X_n), ~ X_i \text{ iid} \sim \mathcal N ( \theta, \theta^2), ~ \theta \in \Theta = \mathbb R \setminus \{0\}, ~ T(X)=(\sum_{i=1}^n X_i, \sum_{i=1}^n X_i^2)$. I figured out that $T(X)$ is sufficient. To show that it's not complete I checked the function $g$ with $g(u,v) = 2u^2 - (n+1)v$ and noticed that $E_{\theta}[g(T(X))] = 0 ~ \forall \theta \in \Theta$ - but why is $P_{\theta}(g(T(X)) = 0) < 1$? And how to show that $T(X)$ is minimal sufficient? I know that one can choose a subfamily $\mathcal P_0 \subset \mathcal P = \{f_{\theta} \mid \theta \in \Theta \}$, where $\mathcal P$ is a family of densities with the same support, and that it's enough to show that $T$ is minimal sufficient for the subfamily. And if we have such a subfamily, $T^{*}(X) = \left(\frac{f_{\theta_1}(x)}{f_{\theta_0}(x)}, \ldots, \frac{f_{\theta_k}(x)}{f_{\theta_0}(x)}\right)$ is minimal sufficient (for $\Theta = \{\theta_0, \ldots, \theta_k\}$). But how to apply this to the present case?
Consider $G(X) = \frac{n+1}{2}\sum_{i=1}^n X_i^2 - \left(\sum_{i=1}^n X_i\right)^2$. Then $E_\theta(G(X)) = 0$ for all $\theta$, but this does not mean $G(X)= 0$ almost surely; hence $T$ is not complete.
Find the limit: $\lim _{x \to 0}x^{\sqrt{x}}=?$ Find the limit: $$\lim _{x \to 0}x^{\sqrt{x}}=?$$ My try: $$f(x)=x^{\sqrt{x}}$$ $$\ln f(x)= \sqrt{x}\ln x$$ Now what?
You have added the tag limit-without-lhospital, therefore my answer complies with that. Let $\sqrt{x}=u$. Thus your limit $$\lim_{x \to 0} \sqrt x \ln x= \lim_{u \to 0} u \ln u^2= 2 \lim_{u \to 0} u \ln u $$ Thus, all you need to find is $\displaystyle \lim_{u \to 0^+}u\ln(u)$. Let $u=e^{-t}$ and note that as $u \to 0^+$, we have $t \to \infty$. Hence, $$L = \lim_{u \to 0} u \ln(u) = \lim_{t \to \infty} -te^{-t} = -\lim_{t \to \infty} \dfrac{t}{e^t}$$ Now recall that $e^t \geq \dfrac{t^2}2$. Hence, we have $$\lim_{t \to \infty} \dfrac{t}{e^t} \leq \lim_{t \to \infty} \dfrac 2t = 0$$ Thus you get $$\color{red}{\lim_{x \to 0} \sqrt x \ln x= 0},$$ and therefore $\lim_{x \to 0^+} x^{\sqrt x} = e^{0} = 1$.
Use L'Hôpital: $$\lim_{x\to0} \sqrt{x}\ln x = \lim_{x\to0} \frac{\ln x}{\frac{1}{\sqrt{x}}} = \lim_{x\to0} \frac{\frac1x}{-\frac1{2x^{3/2}}} = -2\lim_{x\to 0} \sqrt{x} = 0$$ Now you have $$\lim_{x\to 0} x^{\sqrt{x}} = \lim_{x\to 0} e^{\sqrt{x}\ln x} = e^0 = 1$$ using the continuity of exponentiation.
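A quick numerical look at the limit (a minimal sketch):

```python
import math

for x in (1e-2, 1e-4, 1e-8, 1e-16):
    print(x, x**math.sqrt(x))
# 0.6309..., 0.9120..., 0.99816..., 0.9999996...
# x**sqrt(x) -> 1 as x -> 0+, since sqrt(x)*ln(x) -> 0
```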
Solving an equation which includes $\log$ as both base and exponent Q: If $$9x = x^{\log_3x}$$ then what is $x$? I can't solve it. I have tried to use identities from my book, but I think they are useless for this question. I need a hint.
Hint: Try taking logarithm to the base $3$ on both sides. You'll get a quadratic in $\log_3x$.
Take $\log_3$ of both sides. Then $$ \log_3{9} + \log_3{x} = (\log_3{x}) (\log_3{x}), $$ using $\log{ab}=\log{a}+\log{b}$ and $\log{a^b} = b\log{a}$. This is a quadratic equation for $\log_3{x}$. Solve that and exponentiate $3$ with the answer to find $x$.
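For completeness: with $t=\log_3 x$ the equation reads $2+t=t^2$, so $t=2$ or $t=-1$, i.e. $x=9$ or $x=\frac13$. A quick numerical check (sketch):

```python
from math import log

for x in (9, 1/3):
    # 9x matches x^(log_3 x): 81 = 81 and 3 = 3 (up to float error)
    print(x, 9*x, x**log(x, 3))
```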
Prove that if $P(A) = P(B) = \frac23$ , then $P(A|B) ≥ \frac12$ Prove that if $P(A) = P(B) = \dfrac23$ , then $P(A|B) ≥ \dfrac12$. Well, I thought that because $P(A) + P(B)> 0$, they are independent. So I used $P(A|B)= P(A)$, which I can use due to independence. However, I have doubts, because to be independent they don't necessarily have to be bigger than $0$, but I see no other reason or way to prove that the given probability is bigger than $1/2$.
Hint: Here are the relations which you should use: $P(A\cup B)=P(A)+P(B)-P(A\cap B)$, $P(A\cap B)=P(A|B)\cdot P(B)$, $P(A\cup B)\leq 1$. All that is left is to put them together.
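Spelling the hint out (this is just the three relations above combined): $$P(A\mid B)=\frac{P(A\cap B)}{P(B)}=\frac{P(A)+P(B)-P(A\cup B)}{P(B)}\ge\frac{\frac23+\frac23-1}{\frac23}=\frac12.$$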
We have $P(A) = P(B) = 2/3$. Consider $P(A\mid B)$ and $P(A\mid \lnot B)$. We know that $$P(A) = P(B)\, P(A\mid B) + P(\lnot B)\, P(A\mid \lnot B),$$ so $$\tfrac23 = \tfrac23\, P(A\mid B) + \tfrac13\, P(A\mid \lnot B).$$ We have $0 \le P(A\mid \lnot B) \le 1$. Therefore $$\tfrac23\, P(A\mid B) + \tfrac13 \cdot 0 \;\le\; \tfrac23 \;\le\; \tfrac23\, P(A\mid B) + \tfrac13 \cdot 1,$$ i.e. $$P(A\mid B) \le 1 \le P(A\mid B) + 0.5,$$ so $0.5 \le P(A\mid B) \le 1$. (Intuitively: in two thirds of all cases we have $B$, in one third we have $\lnot B$. If we don't have $A$ in at least half the cases where $B$ holds, the cases with $A$ don't add up to two thirds, even if we have $A$ in all cases where $\lnot B$ holds.)
Is there a simple way to prove the Four Colour Theorem? The four colour theorem says that: Given any separation of a plane into contiguous regions, producing a figure called a map, no more than four colours are required to colour the map so that no two adjacent regions have the same colour. From http://en.wikipedia.org/wiki/Four_colour_theorem The theorem has been proved using computers. But I am wondering, is there a more simple proof than using computers to do it? I feel that using brute force to prove something is like the last resort. So, is there a simple proof of the Four Colour Theorem, or is it yet to be proved?
A completely computer-free proof has yet to be found. However there are easy proofs for the five-color theorem (and up).
The four colour theorem was supposedly proved by Appel and Haken by using a computer as a tool. They used the tool to supposedly go through every example possible. However there is no 'proof' that the proof actually happened and exists. I guess it could be proved by hand, but it would take you years... So the answer is no, there is no simple proof of the four colour theorem.
How do I write down a curve with exactly one rational point Let $g\geq 1$. I would like to write down (for all $g$) a smooth projective geometrically connected curve $X$ over $\mathbf{Q}$ of genus $g$ with precisely one rational point. Is this possible? For which $g$ is this possible? I think for $g=1$ this is possible. I just don't know an explicit equation, but I should be able to find it. (We just write down an elliptic curve without torsion of rank zero over $\mathbf{Q}$.) For $g\geq 2$ things get more complicated for me. I would really like the curve to be of gonality at least $4$, but I'll think about that later.
The smooth plane projective curve $E$ defined over $\mathbb Q$ by the equation $y^2z=x^3+6z^3$ has its point at infinity $[0:1:0]$ as its only rational point: $E(\mathbb Q)=\lbrace [0:1:0]\rbrace $. Indeed: a) The torsion group of the curve $y^2z=x^3+az^3 $ is zero as soon as $a$ is a sixth-power-free integer which is neither a square nor a cube nor equal to $-432$. (Despite appearances I'm not making this crazy theorem up, but I am quoting theorem (3.3) of Chapter 1 in Husemöller's Elliptic Curves!) b) On the other hand the curve $E$ has rank $0$, which means that its group of rational points is torsion (this is stated in the table following the theorem I just quoted). The two results a) and b) prove the assertion in my introductory sentence.
@Harry: When $x$ is rational, the $y$ coordinates are not rational for the majority of the $x$ values. If you look at $(x^2 + 5) \bmod (6 - x)$ the value is only $0$ at $x = 5$; similarly for the other example, $(x^2 + 203) \bmod (114 - x)$ is only $0$ at $x = 47$ and $x = 113$. It is the same for larger numbers (hundreds of digits).
To show that group G is abelian if $(ab)^3 = a^3 b^3$ and the order of $G$ is not divisible by 3 Let $G$ be a finite group whose order is not divisible by $3$. Suppose $(ab)^3 = a^3 b^3$ for all $a,b \in G$. Prove that $G$ must be abelian. Let $G$ be a finite group of order $n$. As $n$ is not divisible by $3$, $n$ is relatively prime to $3$; that is, $\gcd(3,n) = 1$. $n = 1 ,2 ,4 ,5 ,7 ,8 ,10 ,11, 13 ,14 ,17,...$ Further, I know that all groups up to order $5$ are abelian and every group of prime order is cyclic. Then it remains to prove that the groups whose order is greater than $5$ and not prime are abelian. Am I going the right way? Please suggest the proper way to prove this.
First note that the given condition says that $ f: G \to G$ defined by $x \mapsto x^3$ is an injective homomorphism of $G$ (injective because $x^3=1$ forces the order of $x$ to divide both $3$ and $|G|$, hence $x=1$). Further note that $$ \forall a,b \in G: \quad ababab = (ab)^{3} = a^{3} b^{3} = aaabbb. $$ Hence, $$ \forall a,b \in G: \quad baba = aabb, \quad \text{or equivalently}, \quad (ba)^{2} = a^{2} b^{2}. $$ Using this fact, we obtain \begin{align} \forall a,b \in G: \quad (ab)^{4} &= [(ab)^{2}]^{2} \\ &= [b^{2} a^{2}]^{2} \\ &= (a^{2})^{2} (b^{2})^{2} \\ &= a^{4} b^{4} \\ &= aaaabbbb. \end{align} On the other hand, \begin{align} \forall a,b \in G: \quad (ab)^{4} &= abababab \\ &= a (ba)^{3} b \\ &= a b^{3} a^{3} b \\ &= abbbaaab. \end{align} Hence, for all $ a,b \in G $, we have $ aaaabbbb = abbbaaab $, which yields $$ f(ab) = a^{3} b^{3} = b^{3} a^{3} = f(ba). $$ As $ f $ is injective, we conclude that $ ab = ba $ for all $ a,b \in G $. Hence $G$ is an abelian group. Added: I think it's worth mentioning that there exist nonabelian groups $G$ for which $x \mapsto x^3$ is a group homomorphism. The smallest such example is the Heisenberg group of order $27$, which can be thought of as all $3 \times 3$ upper triangular matrices with $1$'s on the diagonal and other entries in the field of order $3$. As $G$ is of exponent 3 (i.e. $x^3=1$ for all $x \in G$), $f$ is a homomorphism, and $G$ is clearly nonabelian because, for example, the following two matrices don't commute: $$\left(\begin{array}{ccc} 1 & 1 & 0\\ 0 & 1 & 2\\ 0 & 0 & 1 \end{array}\right),$$ $$\left(\begin{array}{ccc} 1 & 1 & 0\\ 0 & 1 & 1\\ 0 & 0 & 1 \end{array}\right)$$ In particular this also shows that the condition that $3$ does not divide order($G$) is necessary.
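The two matrices in the counterexample can be checked directly; a sketch with SymPy, reducing entries mod $3$:

```python
from sympy import Matrix, eye

M1 = Matrix([[1, 1, 0], [0, 1, 2], [0, 0, 1]])
M2 = Matrix([[1, 1, 0], [0, 1, 1], [0, 0, 1]])
mod3 = lambda M: M.applyfunc(lambda entry: entry % 3)

print(mod3(M1*M2) == mod3(M2*M1))                    # False: they don't commute
print(mod3(M1**3) == eye(3), mod3(M2**3) == eye(3))  # True True: exponent 3
```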
I have followed a similar method to the previous answer, with a few changes (without defining the $f$; I thought it would be easier). As $(ab)^3=a^3b^3$ for all $a,b\in G$, we have $$ababab=aaabbb$$ $$\Rightarrow baba=aabb$$ $$\Rightarrow (ba)^2=a^2b^2$$ Consider $$(ab)^4=((ab)^2)^2$$ $$=(b^2a^2)^2$$ $$=a^4b^4$$ $$=aaaabbbb$$ Also $$(ab)^4=abababab$$ $$=a(ba)^3b$$ Therefore, we get $aaaabbbb=a(ba)^3b$ $$\Rightarrow (ba)^3=(ab)^3$$ $$\Rightarrow (ab)^{-3}(ba)^3=e$$ where $e$ is the identity in $G$. $$\Rightarrow [(ab)^{-1}(ba)]^3=e$$ Now for $x=(ab)^{-1}(ba)$, $|x|$ divides $3$, so $|x|$ can be $3$ or $1$. But $|x|$ cannot be $3$ (as by Lagrange's theorem, if $|x|=3$ then $3$ divides $|G|$, which is not true). Thus $|x|=1$ $$\Rightarrow x=e$$ $$\Rightarrow (ab)^{-1}(ba)=e$$ Multiplying by $ab$ from the left, $$\Rightarrow ba=ab$$ for all $a,b\in G$. Thus, $G$ is abelian.
Prove that if $a(a^{pq-p-q+2} -b) $ is not divisible by $pq,$ then $a-b$ is not divisible by $pq$. Let $a$ and $b$ be positive integers and $p$ and $q$ be two distinct primes. Prove that if $a(a^{pq-p-q+2} -b) $ is not divisible by $pq,$ then $a-b$ is not divisible by $pq$. I was trying to do a proof by contradiction: Suppose that $pq\mid a-b$. We have that $p\mid a-b$ and $q\mid a-b$ and so: $a \equiv b \pmod{pq} \\ a \equiv b \pmod{p} \\a \equiv b \pmod{q}$ I notice that I can refactor it as $a((a^{q-1})^{p}(a^{-q})(a^2) -b))$ or $a((a^{p-1})^{q}(a^{-p})(a^2) - b))$, so possibly Fermat's little theorem could be helpful. Then I am trying to show that $a(a^{pq-p-q+2} -b) \equiv a_1 \pmod{p} \\ a(a^{pq-p-q+2} -b) \equiv a_2 \pmod{q}.$ Then I use the Chinese Remainder Theorem to show that $a(a^{pq-p-q+2} -b) \equiv 0 \pmod{pq}$, but I have not had any success with doing this; any help is appreciated.
Contrapositive is: $\bmod m\!=\!pq\!:\,\ a\equiv\color{#c09} b\,\Rightarrow\, 0\equiv a(a^{\large \phi(m)+1}\!-\color{#c00}b)\equiv \color{#0a0}{a^{\large 2}}(a^{\large \phi(m)}\!-1),\, $ which is true more generally for all integers $m$ whose prime factors occur to power at most $\rm\color{#0a0}{two},$ by Theorem $ $ Suppose that $\ m\in \mathbb N\ $ has the prime factorization $\:m = p_1^{\large e_{1}}\cdots\:p_k^{\large e_k}\ $ and suppose also that for all $\,i,\,$ $\ \color{#0a0}{e_i\le e}\ $ and $\ \phi(p_i^{\large e_{i}})\mid f.\ $ Then $\ m\mid \color{#0a0}{a^{\large e}}(a^{\large f}-1)\ $ for all $\: a\in \mathbb Z.$ Proof $\ $ Notice that if $\ p_i\mid a\ $ then $\:p_i^{\large e_{i}}\mid \color{#0a0}{a^{\large e}}\ $ by $\ \color{#0a0}{e_i \le e}.\: $ Else $\:a\:$ is coprime to $\: p_i\:$ so by Euler's phi theorem, $\!\bmod q = p_i^{\large e_{i}}:\, \ a^{\large \phi(q)}\equiv 1 \Rightarrow\ a^{\large f}\equiv 1\, $ by $\: \phi(q)\mid f\, $ and modular order reduction. Thus since all prime powers $\ p_i^{\large e_{i}}$ divide $\, a^{\large e} (a^{\large f} - 1)\ $ so too does their lcm = product = $m$. Examples $\ $ You can find many illuminating examples in prior questions, e.g. below $\qquad\qquad\quad$ $24\mid a^3(a^2-1)$ $\qquad\qquad\quad$ $40\mid a^3(a^4-1)$ $\qquad\qquad\quad$ $88\mid a^5(a^{20}\!-1)$ $\qquad\qquad\quad$ $6p\mid a\,b^p - b\,a^p$
$pq-q-p+2=p(q-1)-q+2=(q-1)(p-1)+1$. If $a\equiv b \pmod p$ and $a\equiv b \pmod q$, then Fermat's little theorem applied to $p$ implies that $a^{pq-q-p+2}=a^{(p-1)(q-1)+1}=(a^{p-1})^{q-1}\,a\equiv a \pmod p$ (when $p\nmid a$; if $p\mid a$ both sides are $\equiv 0$). This implies that $a^{pq-p-q+2}-b\equiv a^{pq-p-q+2}-a\equiv 0 \pmod p$; a similar argument shows that $a^{pq-p-q+2}-b\equiv 0 \pmod q$.
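A randomized check of the contrapositive for a few small prime pairs (a minimal sketch):

```python
import random

for p, q in [(3, 5), (5, 7), (11, 13)]:
    m, e = p*q, p*q - p - q + 2
    for _ in range(1000):
        a = random.randrange(10**6)
        b = a + m * random.randrange(10**6)      # force pq | a - b
        assert a * (pow(a, e, m) - b) % m == 0   # then pq | a(a^(pq-p-q+2) - b)
print("ok")
```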
Prove that $\frac{100!}{50!\cdot2^{50}} \in \Bbb{Z}$ I'm trying to prove that $$\frac{100!}{50!\cdot2^{50}}$$ is an integer. For the moment I did the following: $$\frac{100!}{50!\cdot2^{50}} = \frac{51 \cdot 52 \cdots 99 \cdot 100}{2^{50}}$$ But it still doesn't quite work out. Hints, anyone? Thanks
$$ \frac{(2n)!}{n! 2^{n}} = \frac{\prod\limits_{k=1}^{2n} k}{\prod\limits_{k=1}^{n} (2k)} = \prod_{k=1}^{n} (2k-1) \in \Bbb{Z}. $$
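And indeed, as a one-line check (sketch):

```python
from math import factorial

print(factorial(100) % (factorial(50) * 2**50) == 0)  # True
print(factorial(100) // (factorial(50) * 2**50))      # the product 1*3*5*...*99
```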
for the number of multiples of $2$: $$100 = 52 + 2(n_1 - 1)$$ $$n_1 = 25$$ for the number of multiples of $4$: $$100 = 52 + 4(n_2 - 1)$$ $$n_2 = 13$$ for the number of multiples of $8$: $$96 = 56 + 8(n_3 - 1)$$ $$n_3 = 6$$ for the number of multiples of $16$: $$96 = 64 + 16(n_4 - 1)$$ $$n_4 = 3$$ for the number of multiples of $32$: $$96 = 64 + 32(n_5 - 1)$$ $$n_5 = 2$$ for the number of multiples of $64$: $$64 = 64 + 64(n_6 - 1)$$ $$n_6 = 1$$ To get the number of $2$'s as factors, add all the $n$'s up, which yields $50$. This should cancel the $2^{50}$ at the denominator, and thus prove that the expression is an integer.
Find the remainder when $10^{400}$ is divided by 199? I am trying to solve a problem: Find the remainder when $10^{400}$ is divided by 199? I tried it by breaking $10^{400}$ into $1000^{133}\cdot 10$. And when $1000$ is divided by $199$ the remainder is $5$. So finally we have to find the remainder of $5^{133}\cdot 10$. But from here I could not find anything by which it can be reduced to smaller numbers. How can I achieve this? Is there any standard way to solve this type of problem where the divisor is a big prime number? Thanks in advance.
You can use Fermat's little theorem. It states that if $n$ is prime then $a^n$ has the same remainder as $a$ when divided by $n$. So, $10^{400} = 10^2 (10^{199})^2$. Since $10^{199}$ has remainder $10$ when divided by $199$, the remainder is therefore the same as the remainder of $10^4$ when divided by $199$. $10^4 = 10000 = 50*199 + 50$, so the remainder is $50$.
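Fast modular exponentiation confirms both steps instantly (a minimal sketch):

```python
print(pow(10, 199, 199))  # 10, matching Fermat's little theorem
print(pow(10, 400, 199))  # 50
```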
Since $199$ is prime and $\gcd(10,199) = 1$, we have $10^{198} \equiv 1 \pmod{199}$. Squaring both sides: $10^{396} \equiv 1 \pmod{199}$. Now $10^3 \equiv 5 \pmod{199}$ and $10^{4} \equiv 50 \pmod{199}$, so $10^{400} = 10^{396}\cdot 10^{4} \equiv 50 \pmod{199}$. So, the remainder is $50$. This method uses Euler's totient theorem (here in the special case of Fermat's little theorem).
If $a < b$ and $b = \infty$, then $a < \infty$? A very simple question, but I am not sure at the moment. I have a strict inequality $a < b$, and I prove that $b = \infty$, say $b$ is an integral. Does this prove that $a < \infty$, that is, that $a$ is finite? The question seems simple, but things can easily get messed up with infinity. To be more clear, say $a = \int f\,dx$ and $b = \int g\,dx$ and I have $a < b$. If $\int g\,dx = \infty$, can we say that $\int f\,dx < \infty$? If one can construct pathological counterexamples involving limits etc., I will be very happy.
Talking about $\int f\,dx$ makes no sense, as that is a concrete function, and not a number. You might want to talk about the definite integrals $\int_0^\infty f\,dx$ and $\int_0^\infty g\,dx$. You may also want to distinguish between showing $\int_0^\infty f\,dx<\int_0^\infty g\,dx$ (in which case your claim would be correct), and the case where $f(x)<g(x)$ for all $x$ in the domain over which you are integrating. In the latter case take $f(x)=1$ and $g(x)=2$.
This assumption is only correct if $ (a,b) \in \mathbb{R}\times\mathbb{R} $. With $\infty$ in the game, this is no longer correct. The concept of $\infty$ has some interesting properties, among which is a lot of fluidness. Actually, $\infty \lt \infty$, $\infty /\infty \ne 1$ and a bunch of other crazy things are possible with $\infty$.
$\lim_{x\to 0} \left(\frac{1}{1-\cos x}-\frac{2}{x^2}\right)$ Find the limit: $$\lim_{x\to 0} \left(\frac{1}{1-\cos x}-\frac{2}{x^2}\right)$$ My try: $$\lim_{x\to 0} \left(\frac{1}{1-\cos x}-\frac{2}{x^2}\right)=\lim_{x\to 0}\frac{x^2-2(1-\cos x)}{x^2(1-\cos x)}$$ Now what do I do?
Let $x=2y$. $$\lim_{x\to 0} \left(\frac{1}{1-\cos x}-\frac{2}{x^2}\right)$$ $$=\dfrac12\lim_{y\to0}\left(\dfrac1{\sin^2y}-\dfrac1{y^2}\right)$$ $$=\dfrac12\lim_{y\to0}\left(1+\dfrac{\sin y}y\right)\lim_{y\to0}\left(\dfrac{y-\sin y}{y^3}\right)\left(\lim_{y\to0}\dfrac y{\sin y}\right)^2$$ Now use Are all limits solvable without L'Hôpital Rule or Series Expansion for the middle limit.
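For reference, a computer algebra system evaluates the limit directly; a sketch with SymPy:

```python
from sympy import symbols, cos, limit

x = symbols('x')
print(limit(1/(1 - cos(x)) - 2/x**2, x, 0))  # 1/6
```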
$$\lim_{x \to 0}\frac{1}{1-(1-\frac{x^2}{2!}+O(x^4))} - \frac{2}{x^2} = \frac{2}{x^2}-\frac{2}{x^2} = 0$$(higher order terms neglected as the become very small as $x \to0$)
How to prove non-regularity of a language from the non-regularity of another language? How can I prove that $L_1=\{a^nb^m\mid n\ne m\}$ is not regular based on the fact that the language $L_2=\{a^nb^n\mid n\in\Bbb N\}$ is not regular? Thank you
HINT: If $L$ is a regular language over the alphabet $\Sigma$, then $\Sigma^*\setminus L$, the complementary language consisting of words in $\Sigma^*$ that are not in $L$, is also regular. (This is quite easily proved using DFAs.)
Here are some facts: 1) A language $L$ is regular if and only if the complement of $L$ is regular. 2) The intersection of two regular languages is regular. In your question, the complement of $L_1$ is $L_2$ together with all strings containing the substring 'ba', so $L_2 = \overline{L_1} \cap a^*b^*$. We know that $a^* b^*$ is regular (because we gave an explicit regular expression for it). Therefore, if $L_1$ were regular then $L_2$ would be regular by the above two facts, a contradiction. Thus, $L_1$ must not have been regular.
How to prove that the limit of a function is $0$ at every point I have the following function defined on $\mathbb{R}$: $$f(x) = \begin{cases} 0 & \text{if $x$ irrational} \\ 1/n & \text{if $x = m/n$ where $m, n$ coprime} \end{cases}$$ I want to show that $f$ is continuous at every irrational point, and has a simple discontinuity at every rational point. I was able to show the first and partially the second (I showed that $f$ has a discontinuity at every rational point, but got stuck on showing that the discontinuity is simple). However, I've realized that I can simply show that $\lim_{t \rightarrow x}f(t) = 0$ for every $x$, and both of the things I want to show follow from this. How can I show this?
Take any $x$. Let $\epsilon > 0$, and find $N$ such that $\frac1 N < \epsilon$. Now, I claim that there is a $\delta>0$ such that if $y \in (x-\delta,x+\delta) \backslash \{ x\}$, then $y$ is not of the form $\frac mN$ for any integer $m$. Suppose not. Then, considering $\delta$ going to zero and repeatedly contradicting the statement, we get a sequence $y_n = \frac {m_n}N$ such that $y_n \to x$. Now, $m_n \to Nx$. However, remember that the $m_n$ are integers, and convergence can only happen in the integers if the sequence is eventually constant! (because distinct integers are at least distance $1$ from each other). So, $y_n$ is eventually constant. You can work it out from here. With this lemma, surely the proof is not too far away, is it?
HINTS: Observe that there are a finite number of rationals $n/m<1$ for any fixed $m$ (observe that I'm not taking into account whether $n$ and $m$ are coprime or not). Now remember that there exist infinitely many rationals of the kind $n/p<1$ where $p$ is prime. What happens with $m$ when you approximate a number $x$ with a rational $n/m$?
Undoing anonymous donations All the students in a class are planning to do a trip. Not all of the students can afford it, and it is considered shameful to reveal their poverty. So it is suggested that anyone can donate anonymously to a fund. If the fund becomes big enough to cover the trip, the trip happens. If not, the donors get their money back, preserving the anonymity of who donated and who didn't. Is this possible?
Sure. Create a computer system where students can enter a password of their choice and then donate as much money as they choose. If there is not enough total money at the end, students simply re-enter their passwords and the machine gives back that much money.
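A toy sketch of that mechanism (the class and method names are mine, purely illustrative):

```python
class AnonymousFund:
    def __init__(self, goal):
        self.goal = goal
        self.pledges = {}  # password -> amount; no identities are stored

    def donate(self, password, amount):
        self.pledges[password] = self.pledges.get(password, 0) + amount

    def goal_reached(self):
        return sum(self.pledges.values()) >= self.goal

    def refund(self, password):
        # If the goal was missed, each donor reclaims with their password alone
        return self.pledges.pop(password, 0)
```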
Yes, it is possible if the payment is done via PayPal or another online payment system. Bank transactions are validated and executed only if the funding limit is reached. So, while the target is not reached, nobody loses money and everybody stays anonymous.
Proving Riemann integrability using sequences of Riemann sums I am trying to prove the following: Suppose $ f:[a,b]\rightarrow\mathbb{R} $ is bounded. Then $ f $ is Riemann integrable if and only if for each sequence of marked partitions $\{P_n\}$ with $\{\mu(P_n)\}\rightarrow0$, the sequence $\{S(P_n,f)\}$ is convergent, where $\mu(P)$ is the mesh of partition $P$ and $S(P,f)$ is the Riemann sum of $f$ over partition $P$. My attempt at a solution: Suppose for each sequence of marked partitions $\{P_n\}$ with $\{\mu(P_n)\}\rightarrow0$, $ \{S(P_n,f)\}$ converges. Let $\epsilon>0$ be given. Then there is an $A\in\mathbb{R}$ and $N\in\mathbb{N}$ such that when $n>N$, there exists $\delta$ such that $\mu(P_n)<\delta\implies |S(P_n,f)-A|<\epsilon$. Then, by the theorem provided by leo below, the existence of $A$ implies that $f$ is Riemann integrable. Now suppose $f$ is integrable. Then given $\epsilon>0$, there exists $A\in\mathbb{R}$ such that there exists $\delta$ for which $\mu(P)<\delta\implies |S(P,f)-A|<\epsilon, \forall P$. Then for each sequence of marked partitions $\{P_n\}$ with $\{\mu(P_n)\}\rightarrow0$, eventually $\mu(P_n)<\delta$. Then $|S(P_n,f)-A|<\epsilon$, which means that $\{S(P_n,f)\}$ converges to $A$. Also, by the theorem below, $A=\int f \, dt$.
Your statement should be an easy corollary of the du Bois-Reymond and Darboux integration theorem. The proof of the theorem is rather cumbersome, so here is a reference: the proof can be found in Analysis by Its History by Hairer and Wanner. And by the way, you need to define what convergence $\{\mu(P_n)\}\to 0$ means and what a marked partition is.
Composition series and chief series of $p$-group Composition series and chief series of a $p$-group. How does one solve the following problem? Thanks. Let $G$ be a group of order $p^n$, $p$ prime. Prove every chief factor and every composition factor is of order $p$. Is every composition series a chief series of $G$?
Hints: Let $\,G\;,\;\;|G|=p^n\;,\; p\;$ a prime, be a group, then use that $\,|Z(G)|>1\;$ and a little induction to show that for any $\,0\le k\le n\;,\;\;G\;$ has a normal subgroup of order $\;p^k\;$. This already answers (almost...) questions (1)-(2). As for the last question: take the dihedral group $\;G:=\langle s,t\;;\;s^2=t^4=1\;,\;sts=t^3\rangle\;$ of order 8, and check the series $$1\le\langle t^2\rangle\le\langle t\rangle\le G\;\ldots$$
Hint for the first question: If $G$ is a finite $p$-group and $G\neq1$, then $G$ contains a normal subgroup of order $p$. Hint for the second question: find a counterexample.
Evaluate $\sqrt{2x} \cdot \sqrt x$ So I have this problem: $\sqrt{2x} \cdot \sqrt x =\ldots$ I already have the answer, which is $x\sqrt2$, but I just can't understand it. Can someone help?
$$ \sqrt{2x}\sqrt{x}=\sqrt{2}\sqrt{x}\sqrt{x}=\sqrt{2}(\sqrt{x})^2=\sqrt{2}x=x\sqrt{2} $$ since the square and square-root cancel each other out.
You need to convince yourself that (for non-negative reals, say) $$ \sqrt{a}\sqrt{b} = \sqrt {ab}. $$ Note that this is a special case of the fact that for any nonzero real $\rho$ the map $x \to x^{\rho}$ is an automorphism of the abelian multiplicative group structure on $\mathbb{R^+}$.
Calculating the matrix corresponding to linear map How does one go about converting a linear map in functional form to a matrix; for instance: For a fixed unit vector $\hat{n} \in \mathbb{R}^{3}$, define the map $f:\mathbb{R}^{3}\to\mathbb{R}^{3}$ by: $$f(\vec{v})=\vec{v}-2(\hat{n}\cdot\vec{v})\hat{n}$$ Work out the matrix $\mathbf{A}$ describing $f$ relative to the basis $\{\hat{\mathrm{i}},\hat{\mathrm{j}},\hat{\mathrm{k}}\}$ and show that $\mathbf{A}^{2}=\vec{1}$ The question then goes on to ask how to work out a matrix describing $f$ relative to a different basis set $\{\hat{u}_{1},\hat{u}_{2},\hat{n}\}$ and I'm not sure how to approach these problems.
Recall what it means to find the matrix $A$ of a linear transformation $T$ (with respect to a given basis $B$): it means to find $A$ such that $[T(v)]_B=A[v]_B$. (Here, $[v]_B$ means the coordinates of $v$ with respect to the basis $B$.) In other words, $A$ is the matrix representation of $T$ if the "action" of $T$ on an input vector in $B$-coordinates is just multiplying that input vector on the left by the matrix $A$. Looking at this example might help. You have the linear transformation $f:\mathbb{R}^3\to\mathbb{R}^3$, $f(\mathbf{v})=\mathbf{v}-2(\mathbf{n}\cdot \mathbf{v})\mathbf{n}$ and you are working with the standard basis on $\mathbb{R}^3$. Let $\mathbf{v}=(a,b,c)$ and $\mathbf{n}=(n_1,n_2,n_3)$. Then \begin{align} f(\mathbf{v})&=\mathbf{v}-2(\mathbf{n}\cdot \mathbf{v})\mathbf{n}\\ f((a,b,c))&=(a,b,c)-2\left((n_1,n_2,n_3)\cdot(a,b,c)\right)(n_1,n_2,n_3)\\ &=\begin{bmatrix} a-2an_1^2-2bn_1n_2-2cn_1n_3\\ b-2an_1n_2-2bn_2^2-2cn_2n_3\\ c-2an_1n_3-2bn_2n_3-2cn_3^2 \end{bmatrix}\\ &=\underbrace{\begin{bmatrix} 1-2n_1^2 & -2n_1n_2 & -2n_1n_3\\ -2n_1n_2 & 1-2n_2^2 & -2n_2n_3\\ -2n_1n_3 & -2n_2n_3 & 1-2n_3^2 \end{bmatrix}}_A \begin{bmatrix} a\\ b\\ c\end{bmatrix}. \end{align}
In your case $n=3$, $v_1:=\hat{i}$, $v_2:=\hat{j}$, $v_3:=\hat{k}$, $T:=f$. Step 1: Evaluate each element of the basis $\{v_1,\ldots,v_n\}$ in the transformation. Step 2: Expand each of these values in the basis: $$T(v_i)=\alpha_{1i}v_1+\ldots+\alpha_{ni}v_n.$$ Step 3: Put these coefficients as columns of a matrix, in the same order as the basis $\{v_1,v_2,\ldots,v_n\}$, to get $$\begin{bmatrix}\alpha_{11}&\alpha_{12}&\ldots&\alpha_{1n}\\\alpha_{21}&\alpha_{22}&\ldots&\alpha_{2n}\\\vdots&\vdots&\vdots&\vdots\\\alpha_{n1}&\alpha_{n2}&\ldots&\alpha_{nn}\end{bmatrix}$$ Doing the first column: $f(\hat{i})=\hat{i}-2(\hat{n}\cdot \hat{i})\hat{n}=\hat{i}-2\hat{n}(1)\hat{n}=(1-2\hat{n}(1)^2)\hat{i}-2\hat{n}(1)\hat{n}(2)\hat{j}-2\hat{n}(1)\hat{n}(3)\hat{k}$. So, the first column of the matrix is $\begin{bmatrix}1-2\hat{n}(1)^2\\-2\hat{n}(1)\hat{n}(2)\\-2\hat{n}(1)\hat{n}(3)\end{bmatrix}$, where $\hat{n}(1)$, $\hat{n}(2)$, $\hat{n}(3)$ are the components of $\hat{n}$ in the basis $\hat{i}$, $\hat{j}$, $\hat{k}$, which I am assuming to be the standard basis of $\mathbb{R}^3$.
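Both recipes produce the reflection matrix $A=I-2\hat n\hat n^{T}$ (a Householder-type reflection in the plane with normal $\hat n$), and $A^2=1$ holds because reflecting twice is the identity. A numerical sketch (the sample vectors are mine):

```python
import numpy as np

n = np.array([1.0, 2.0, 2.0])
n /= np.linalg.norm(n)            # unit normal

A = np.eye(3) - 2*np.outer(n, n)  # matrix of f in the standard basis
v = np.array([0.3, -1.2, 2.5])

print(np.allclose(A @ v, v - 2*np.dot(n, v)*n))  # True: A implements f
print(np.allclose(A @ A, np.eye(3)))             # True: A^2 = I
```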
Why are fractions the same as divisions? Ever since I learned about fractions in Elementary School, I've known how to work with them. The problem is, although I remember the mathematical rules, I don't remember how I assimilated the equality between fractions and divisions when I was younger. This is really troubling to my mind because it feels like I partly understand this concept. I know that the denominator represents the amount of parts a whole was divided into and that the numerator represents the amount of parts I'm working with. Let's say, I want to divide 3 by 4. The result of this division is 3/4. But, in this case, I'm dividing 3 wholes by 4. Doesn't this come into conflict with the definition of the denominator, which is the amount of parts in which 1 whole is divided into? This confusion is really bugging me and I'd really appreciate if someone could clear it up. Thanks for the attention.
You are right that there are two things going on here, and it doesn't seem obvious that they are the same. On the one hand you have $3$ units and you take a fourth of that (that's dividing $3$ by $4$). On the other hand you have a unit, cut it in four parts and keep only three of these parts (that's the fraction "three fourths"). The reason why they are the same is that you can achieve the first operation by cutting each of your three units into four parts, and take one part from each. By doing that, you took a fourth of your three units, and at the same time what you have in your hands is three "fourths of a unit".
Three over four is literally another way of writing "three divided by four", which is 0.75. When you see three over four "on its own", that means "0.75 'of one'", which is of course 0.75. Three over four, times four, is of course three. Three over four times a hundred is of course 75. Three over four of an eight-slice New York pizza is of course six slices of pizza. Three over four of your income is what the tax man takes. And so on. If you see three over four "on its own", it's just a convention that it means "times one". "I'm dividing 3 wholes by 4. Doesn't this come into conflict with the definition of the denominator, which is the amount of parts in which 1 whole is divided into?..." Where you wrote "1 whole" just above, near the end, you meant "the whole thing"... that is to say, "the whole thing in question"... This is completely non-mysterious: in English it's normal to say "one whole thing" where the "one whole thing" might be "a swimming pool", "the USA", "1", "88", "the color blue", or whatever. (The English phrase "one whole thing" happens to have the word "one" in it. Much as one may say, "One must carefully brush one's hair before meeting one's grandmother." It has no connection to "1.0".) The "whole thing" here is indeed... "3". That's all there is to it. Just as you say, the denominator ("4.0" here) is what you divide into the "whole thing". Here the "whole thing" is 3. That's all you're seeing there!
Convergence of $\sum_{n=1}^\infty \frac{1}{n}-\ln(1+ \frac{1}{n})$ I'm having trouble proving/disproving whether the series $$\sum_{n=1}^\infty\left( \frac{1}{n}-\ln(1+ \frac{1}{n})\right)$$ is convergent or divergent. I tried the ratio test but it seems to be inconclusive. I think I should use some reference series for the comparison test here, but I couldn't find a matching series. I also tried to put everything over a common denominator and obtained $\sum_{n=1}^\infty \frac{1-n\ln(1+ \frac{1}{n})}{n}$, so I have to find something smaller than $n\ln(1+ \frac{1}{n})$, and I couldn't figure out a matching alternative. I would appreciate your advice.
Note that $\ln (1+x)=x-\frac12x^2+O(x^3)$. Hence $\frac1n-\ln(1+\frac 1n)=\frac1{2n^2}+O(n^{-3})$ and so for $n$ large enough we have $0<\frac1n-\ln(1+\frac 1n)<\frac1{n^2}$. In case you are "afraid" of the big-O or did not learn about Taylor expansion yet, it is sufficient to show the following: $$ x-x^2\le \ln(1+x)\le x\qquad\text{for }x\ge0.$$ The right hand side is an immediate consequence of $e^x\ge 1+x$ (the mother of all inequalities for the exponential). The left hand side follows from $e^{x^2-x}\ge 1+x^2-x>0$ so that $$e^{x-x^2}=\frac1{e^{x^2-x}}\le \frac1{1-x+x^2} =\frac{1+x}{1+x^3}\le 1+x$$
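Numerically the partial sums settle quickly; in fact $\sum_{n=1}^N\left(\frac1n-\ln\frac{n+1}{n}\right)=H_N-\ln(N+1)$ telescopes, so the series converges to the Euler-Mascheroni constant $\gamma\approx0.5772$. A minimal sketch:

```python
from math import log

s = 0.0
for n in range(1, 10**6 + 1):
    s += 1/n - log(1 + 1/n)
print(s)  # ~0.577215..., approaching the Euler-Mascheroni constant
```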
Just use the limit comparison test . Use the series $\sum_{n=1}^\infty(1/n)$. The limit comes out to be 1 and hence both the series converge or diverge together.
Average number of $5$-card draws before all $52$ cards in a deck are drawn. So an interesting question was brought to me today and I'm not sure how to formulate the equation to answer it. A person draws $5$ cards from a deck, writes the cards down, puts the cards back in the deck and reshuffles. What is the average number of draws they have to perform before all $52$ cards are drawn? What about before $45$ out of the $52$ cards are drawn? Finally, how many draws if red cards were twice as likely to be drawn as black cards? If you could show the equation for each it would be much appreciated.
This sounds like the coupon collector problem. https://en.wikipedia.org/wiki/Coupon_collector%27s_problem While the 5 card draw does make it easier, for a single card hand it will take 236 draws on average to collect all 52 cards. So as an upper bound for the 5 card version I would say 48 hands.
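A Monte Carlo estimate for the 5-card version (a sketch; the question's third variant with weighted red cards would need a different sampler):

```python
import random

def hands_to_collect(deck=52, hand=5, trials=20000):
    total = 0
    for _ in range(trials):
        seen, hands = set(), 0
        while len(seen) < deck:
            seen.update(random.sample(range(deck), hand))
            hands += 1
        total += hands
    return total / trials

print(hands_to_collect())  # typically around 45 hands, below the ~48 bound
```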
Very simple predicate logic deduction question I am very new to logic and currently taking a course about it but unfortunately it's a weekend now so I can't get the answers I need! Basically I am wondering a very basic thing. I want to prove something with natural deduction and let's say I have this premise: Ok so let's start the question, here's an example but not a full example, just a short bit: $$\forall x\forall y\big(A(x)\to B(y)\big)\qquad\text{(premise)}$$ So we got these two variables $x$ and $y$ and then just get rid of the quantifiers and replace the variables with two arbitrary constants just like the rule says: $$A(c)\to B(d)$$ Okay now for the question... Let's also say that I have another premise, or perhaps just an assumption even, that says: $$A(d)$$ Would it be usable with $A(c)$? Like, could I use the modus ponens rule like this: $$A(d)\quad A(c) \to B(d)$$ The two constants $d$ and $c$ are different. I'mma guess the answer to this question is in fact "no" but it's something that keeps bothering me (because I always go like "Hmm but I can do this to appl--Oh... Guess not...)" and I just want to 100% make sure it's not possible since I am absolutely horrible at this subject!
In natural deduction, you may deduce $P(t)$ from $\forall x\: P(x)$ for arbitrary terms $t$. Thus, in your case, you can deduce $A(d) \rightarrow B(d)$ from $\forall x\forall y\: (A(x)\rightarrow B(y))$, and then use the premise $A(d)$ to deduce $B(d)$. You can generalize that by instead deducing $A(d) \rightarrow B(t)$ from $\forall x\forall y\: (A(x)\rightarrow B(y))$ ($t$ is again some arbitrary term), and then use $A(d)$ to deduce $B(t)$.
Couldn't you just get rid of the quantifiers with $A(d) \to B(d)$ as well? I mean, it's true for ALL $x$ and $y$... but presumably $A(c)$ and $A(d)$ are different, so you couldn't just use what you wrote. Luckily your initial premise is very, very flexible!
Remainder of Polynomial Division of $(x^2 + x +1)^n$ by $x^2 - x +1$ I am trying to solve the following problem: Given $n \in \mathbb{N}$, find the remainder upon division of $(x^2 + x +1)^n$ by $x^2 - x +1$ the given hint to the problem is: "Compute $(x^2 + x +1)^n$ by writing $x^2 + x +1 = (x^2 - x +1) + 2x$. Then, use the uniqueness part of the division algorithm." If I take $a = x^2 - x +1$ I have $$(x^2 + x +1)^n = (a + 2x)^n = a^n + \binom{n}{1}a^{n-1} 2x+ \binom{n}{2}a^{n-2} (2x)^2 + \dots + (2x)^n$$ but how do I proceed further?
Every term but the last is divisible by $a$, so we only need to deal with $(2x)^n \bmod \left(x^2-x+1\right)$. Let's start from small ones, and try to find patterns: $$ \begin{aligned} ((2 x)^1 &\bmod \left(x^2-x+1\right))=\color{gray}{2x} \\ ((2 x)^2&\bmod \left(x^2-x+1\right)) =\color{red}{-4+4 x} \\ ((2 x)^3 &\bmod \left(x^2-x+1\right)) = \color{blue}{-8}\\ ((2 x)^4 &\bmod \left(x^2-x+1\right)) = \color{green}{-16x}\\ ((2 x)^5 &\bmod \left(x^2-x+1\right)) = 32-32x\\ ((2 x)^6 &\bmod \left(x^2-x+1\right)) = 64\\ ((2 x)^7 &\bmod \left(x^2-x+1\right)) =\color{gray}{128x}\\ ((2 x)^8 &\bmod \left(x^2-x+1\right)) = \color{red}{-256+256x}\\ ((2 x)^9 &\bmod \left(x^2-x+1\right)) = \color{blue}{-512} \\ ((2 x)^{10} &\bmod \left(x^2-x+1\right)) =\color{green}{-1024 x} \end{aligned} $$ What do you find? Hint: Divide into groups by $n \bmod 6$. Prove by induction separately.
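If you'd rather generate this table than compute it by hand, here is a minimal SymPy sketch (my own check, not part of the original answer); it makes the period-6 structure, up to powers of $2$, visible:

```python
from sympy import symbols, rem

x = symbols('x')
# Polynomial remainder of (2x)^n modulo x^2 - x + 1, for n = 1..12.
for n in range(1, 13):
    print(n, rem((2*x)**n, x**2 - x + 1, x))
```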
Great job so far! Notice that all terms of the form $\binom{n}{i}a^{n-i}$ are divisible by $a$ if and only if $i<n$. This means that the remainder is simply $(2x)^n$.
A person went to market to buy 1.5 kg of dried peas having 20% water content. He went home and soaked them for some time and the water ... A person went to market to buy 1.5 kg of dried peas having 20% water content. He went home and soaked them for some time, and the water content in the peas became 60%. Find the final weight of the soaked peas. My approach: Since 1.5 kg contains 20% water, it means 1.5 x 20/100 = 300 gm of water content. Now I am not getting an idea of how to use this, or maybe I am wrong here. Please guide me; it will be of great help. Thanks.
1.5 kg leads to 300 grams of water being 20%. So, we have 1.2 kg of dried material. By soaking we get this 1.2 kg of dried material to equate to 40% of our total mass ( 100% - 60% water weight). $$1.2 \text{kg} /40\% =3 \text{kg}$$ So, that means we have 3 kg of mass total. This of course assumes the percent water content is by mass and not by volume.
Direct Proportionality implies that increasing mass $m$ is linked to increasing water content $w$, $w \sim m$ or $w = c ~ \cdot ~m$, where $c$ is a constant (direct proportionality factor). We are given $$(w_1;m_1)=(0.2;1.5)$$ From what's given, we can find $$c = w_1/m_1 = 2/15$$ If we like to find $m_2$ in $(0.6;m_2)$, then we use $c$ and the equation $ w = c ~ \cdot ~m$ to get $$m_2 = w_2/c=0.6/(2/15)=4.5 ~\text{kg}.$$ EDIT: Apparently it's not that easy, because $4.5 ~\text{kg}$ is too much weight? So, we forget about direct proportionality; it's of no help in this case... Let's check the chemistry and mechanics of the problem. The water content has many different definitions, and it can be defined as the gravimetric water content (which also has many types of sub-definitions depending on the application and on the point of time of investigation of the water content): $$u'=m_{\text{water}}/m_{\text{wet peas}}.$$ If $m_{\text{wet peas}}=1.5 ~\text{kg}$ of dried peas have $u'=20~\%$ water content, then we use the above formula: $$m_{\text{water}}=u' \cdot m_{\text{wet peas}} = 20~\% \cdot 1.5~\text{kg}= 0.3~\text{kg}.$$ Now the water mass is just the difference of the masses of the wet peas and the dried peas: $$m_{\text{water}}=m_{\text{wet peas}}-m_{\text{dried peas}}.$$ Solve the above equation for $m_{\text{dried peas}}$ and plug in the numbers for $m_{\text{wet peas}}$ and $m_{\text{water}}$: $$m_{\text{dried peas}} = m_{\text{wet peas}} - m_{\text{water}}= 1.5~\text{kg}-0.3~\text{kg}=1.2~\text{kg}.$$ If now the water content becomes $u'=60\%$, then we still have $m_{\text{dried peas}} =1.2~\text{kg}$, but: $$ m_{\text{wet peas}} = m_{\text{dried peas}}/(1-u') =\color{red}{3~\text{kg}} .$$ This result is achieved using the definition of the gravimetric water content before drying, which is: $$ u'=m_{\text{water}}/m_{\text{wet peas}}=(m_{\text{wet peas}}-m_{\text{dried peas}})/m_{\text{wet peas}}.$$ EDIT: The easiest and quickest method to solve this problem is by embracing the condition that the dry mass of the peas before soaking ($=80\% \cdot 1.5~ \text{kg}$) and the dry mass of the peas after soaking ($=40\% \cdot m_{\text{wet peas}}~ \text{kg}$) did not change. This condition leads immediately to: $$80\% \cdot 1.5=40\% \cdot m_{\text{wet peas}} \Rightarrow m_{\text{wet peas}} = (80\% \cdot 1.5)/40\% = 3~\text{kg}.$$
how to solve $x^{113}\equiv 2 \pmod{143}$ I need to solve $x^{113} \equiv 2 \pmod{143}$ $$143 = 13 \times 11$$ I know that it is equivalent to $x^{113}\equiv 2 \pmod{13}$ and $x^{113}\equiv 2 \pmod{11}$ By Fermat I got 1) $x^{5} \equiv 2 \pmod{13}$ 2) $x^{3} \equiv 2 \pmod{11}$ Now I'm stuck.
By the Chinese remainder theorem, you have to solve first $$\begin{cases}x^{113}\equiv2\mod13\\x^{113}\equiv2\mod11\end{cases}$$ Now Little Fermat says for any $x\not\equiv 0\mod 13\enspace(\text{resp. }11)$, one has $x^{12}\equiv 1\mod 13$, resp. $x^{10}\equiv 1\mod 11$. Hence the system of congruences is equivalent to $$\begin{cases}x^{113\bmod 12}\equiv x^5\equiv2\mod13,\\x^{113\bmod 10}\equiv x^3\equiv2\mod11. \end{cases}$$ Let's examine the different possibilities for $x$. We can eliminate the values $0$ and $1$. Hence, modulo $13$, let's draw a table, using the fast exponentiation algorithm: $$\begin{matrix} x& \pm 2&\pm 3&\pm4 &\pm 5 & \pm 6\\ \hline x^2&4&-4&3&-1&-3\\ x^4&3&3&-4&1&-4\\ \hline x^5&\pm6&\pm6&\mp3&\pm5&\pm 2 \end{matrix}$$ Thus the solution is $\;\color{red}{x\equiv6\mod 13}$. Modulo $11$, we have $$\begin{matrix} x& \pm 2&\pm 3&\pm4 &\pm 5\\ \hline x^2&4&-2&5&-1\\ \hline x^3&\mp3&\pm5&\mp 2&\mp5 \end{matrix}$$ Here the solution is $\;\color{red}{x\equiv-4\mod 11}$. To solve this system of simultaneous congruences, we need a Bézout relation between $13$ and $11$: $$6\cdot 11-5\cdot13=1.$$ The solution of the system of congruences is $$x\equiv\color{red}{6}\cdot 6\cdot 11-(\color{red}{-4})\cdot5\cdot 13\equiv656\equiv \color{red}{84 \mod 143}.$$
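A quick machine check of the final answer (my own verification sketch, not part of the original solution):

```python
# Verify the two local congruences and the CRT recombination.
assert pow(6, 5, 13) == 2          # x = 6 (mod 13)
assert pow(-4, 3, 11) == 2         # x = -4 (mod 11)
# x -> x^113 is a bijection on the units mod 143, so the solution is unique:
assert [x for x in range(143) if pow(x, 113, 143) == 2] == [84]
print("x = 84 (mod 143)")
```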
The first congruence tells you $x \equiv 6 \pmod {13}$ and the second tells you $x \equiv 7 \pmod {11}$, now apply the Chinese remainder theorem to get your answer.
Is the empty set a subset of itself? Sorry but I don't think I can know, since it's a definition. Please tell me. I don't think that $0=\emptyset\,$, since I distinguish between the empty set and the value $0$. Do all sets, even the empty set, have infinite emptiness, e.g. do all sets including the empty set contain infinitely many empty sets?
There is only one empty set. It is a subset of every set, including itself. Each set only includes it once as a subset, not an infinite number of times.
An empty subset can be selected from any set, including the empty set itself, using selection criteria that cannot be satisfied, e.g. selecting those elements $x$ such that $x\neq x$. See my formal proof (in DC Proof format) at: http://dcproof.com/ExistenseOfNullSet.htm There is only one empty set. If $\phi_1$ and $\phi_2$ are empty sets, then $\ \phi_1 = \phi_2$. See my formal proof (in DC Proof format) at: http://dcproof.com/UniquenessOfNullSet.htm Edit: For any set $X$ we can select an empty subset $S$ such that: $\forall a:[a\in S\iff a\in X\land a\neq a]$ or $\forall a:[a\in S\iff a\in X\land a\notin X]$
Prove that the general linear group, GL(V), is a group If V is a vector space and GL(V) is the set of all linear transformations from V to V that are bijections, prove that GL(V) is a group with operation composition. I am out of practice with algebra, and perhaps this is too abstract for me, but given that the linear transformations are bijections, aren't associativity and inverses already proved? And how does one prove the existence of an identity element?
Let $T \in GL(V)$. We wish to show that $S := T^{-1}$ is a linear map. Proof: Let $w, w' \in V$. Then, since $T$ is a bijection, there exist $v, v' \in V$ such that $w = T(v)$ and $w' = T(v')$. Then, $S(w) + S(w') = S(T(v)) + S(T(v')) = v+v' = S(T(v+v')) = S(T(v)+T(v')) = S(w+w')$. Homogeneity, $S(cw) = cS(w)$, follows in the same way.
Why don’t numbers with a difference of 2 have any common factors besides 1 and possibly 2? For example, if x is odd, that means x-1 and x+1 are both even. And from what I’ve seen, because x+1 and x-1 have a difference of 2, they have no common factors besides 1 and, because they are both even, 2. If x is even, then x+1 and x-1 are both odd and therefore have only 1 as a common factor. My questions are: Is this true in all cases? If so, why is this true?
Yes, it is true in all cases. Hint: if $m$ is a common factor of $a$ and $b$, then $m$ divides $a-b$.
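If it helps intuition, the hint can also be checked mechanically — a minimal sketch (mine, not the answerer's):

```python
from math import gcd

# Any common factor of n-1 and n+1 divides their difference 2,
# so the gcd is always 1 or 2.
assert all(gcd(n - 1, n + 1) in (1, 2) for n in range(2, 100_000))
print("gcd(n-1, n+1) is 1 or 2 for all tested n")
```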
If $x,y$ are numbers with $d$ as a common factor, then $|x-y|$ is also a multiple of $d$. Hence either $x=y$ or $|x-y|\ge d$. Therefore, numbers with $|x-y|=2$ cannot have a common factor $d$ that is $>2$. Two such even numbers certainly have at least, and hence exactly, $2$ as a common factor. Two odd numbers cannot have an even common factor, hence $1$.
$\lim _{x\to 0}\left(\frac{e^x-1}{\:x^3}\right)$ I apply L'Hopital thrice and I get 1/6 but here they've stopped at infinity. What is the correct answer?
Note that L’Hopital rule is used only for limits of $\frac00$ or $\frac{\infty}{\infty}$ form. Right now, on substituting $x=0$, we get a $\frac00$ form. But, if we differentiate it once, we get: $$\lim_{x \to 0} \frac{e^x}{3x^2}$$ which is not of $\frac00$ form. So, we cannot use L’Hopital rule here. Evaluating the above limit, the answer is obviously $\infty$.
I don't know how you used L’Hopital's rule, but look (assume I'm taking the limit all the time): $$\frac{e^x-1}{x^3}=\frac{e^x}{3x^2}$$ This is a limit of the form $\frac1{0^+}$, which in this case is a determinate form: it goes to $+\infty$ because $\lim_{x\to0}x^2$ goes to $0$ from the positive direction. The answer is infinity.
Can we treat logic mathematically without using logic? I'm reading Kleene's introduction to logic and in the beginning he mentions something that I have thought about for a while. The question is how can we treat logic mathematically without using logic in the treatment? He mentions that in order to deal with this, what we do is separate the logic we are studying from the logic we are using to study it (which are the object language and metalanguage, respectively). How does this answer the question? Aren't we still using logic to build logic? And I have a feeling that the answer is to some extent that we use simpler logics to build more complex ones, but then don't we run into a paradox of what's the simplest logic, for won't any system of logic, no matter how simple, be a metalanguage for a simpler language?
We use logic to study logic, not to create logic. Our study is usually not intended to justify some logic but rather to understand how it works. For example, we might try to prove that, whenever a conclusion $c$ follows from an infinite set $H$ of hypotheses then $c$ already follows from a finite subset of $H$. Many logical systems have this finiteness property; many others do not. And that's quite independent of the logic that we use in studying this property and trying to prove or disprove it for one or another logical system. Here's an analogy: Suppose a biologist is writing a paper about the origin of trees. He could use a wooden pencil to write the paper. That pencil was made using wood from trees, so its existence presupposes that the origin of trees actually happened. Nevertheless, there is nothing circular here. The pencil that is being used probably consists of wood quite different from that in prehistoric trees. And even if it wasn't different, there's no problem with using the pencil to describe those ancient trees. Similarly, there's no problem using ordinary reasoning, also called logic, to describe and analyze the process of reasoning.
You can use arithmetic to do logic:

true is $1$, false is $0$

$a \cdot b \Leftrightarrow$ logical and

$1-a \Leftrightarrow$ logical not

$a+b>0 \Leftrightarrow$ logical inclusive or

$a+b=1 \Leftrightarrow$ logical exclusive or
Proving the expected value of the square root of X is less than the square root of the expected value of X How do I show that $E(\sqrt{X}) \leq \sqrt{E(X)}$ for a positive random variable $X$? I may be intended to use the Cauchy-Schwarz Inequality, $[E(XY)]^2 \leq E(X^2)E(Y^2)$, but I'm not sure how.
The Cauchy-Schwarz inequality tells us that if $A=\sqrt{X}$ and $B=1$, then $$E[\sqrt{X}]^2=(E[AB])^2\leq E[A^2]E[B^2]=E[X]$$ so $$E[\sqrt{X}]\leq \sqrt{E[X]}$$
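As an illustration (not a proof), one can watch the inequality hold on simulated data. A minimal sketch, assuming an Exp(1) sample, where $E[\sqrt X]=\sqrt\pi/2\approx0.886$ and $\sqrt{E[X]}=1$:

```python
import math, random

xs = [random.expovariate(1.0) for _ in range(100_000)]
mean_sqrt = sum(math.sqrt(v) for v in xs) / len(xs)   # estimates E[sqrt(X)]
sqrt_mean = math.sqrt(sum(xs) / len(xs))              # estimates sqrt(E[X])
print(mean_sqrt, "<=", sqrt_mean, mean_sqrt <= sqrt_mean)
```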
Let us assume a variable $x$ and its probability $p$. Then we calculate $x^2p - x^2p^2$. Since $p$ is a fraction, it is greater than $p^2$, so $x^2(p - p^2)$ is positive. Summing over all variables we get, by definition, $E[X^2] - (E[X])^2 \ge 0$, or $E[X^2] \ge (E[X])^2$. Taking the square root and putting $X = X^{0.5}$ gives us the result. Since probability is a fraction and becomes very small as the number of variables gets large, summing over $x^2p^2$ can be approximated as the square of the summation of the product of $x$ and $p$.
How many squares are in this image? Is there a method to check? In this image I have counted 14 but others say 18. Is there a method to check exactly?
[I will give two methods. The SECOND one is the better one.]

Method? Well, sort of: you can look at each "atom" piece and count how many squares it is the upper right hand corner of.

A) Top right square -> top right square; engulf the rectangles for a 3 by 3; the whole 4x4 = 3.

B) Top middle rectangle -> 2x2 square; 3x3 square = 2.

C) Top left square -> top left square = 1.

D) Middle side rectangle -> 2x2 square; 3x3 = 2.

E) Next square in the (2,2) spot -> 1x1; 2x2; 3x3 = 3.

F) (3,2) spot -> 1x1; 2x2 = 2.

G) (4,2) spot -> 1x1 = 1.

H) (3,2) spot -> 1x1; 2x2 = 2.

I) (3,3) spot -> 1x1 = 1.

J) (4,1) spot -> 1x1 = 1.

K) Low middle rectangle = 0.

L) (4,4) spot -> 1x1 = 1.

So total: 18.

=======

This is a simplification of simply figuring out the squares in a complete grid and subtracting the ones that the rectangles make impossible.

A 4x4 grid will have: 16 1x1 squares; 9 2x2 squares (as there are 3 squares in each of the top 3 rows that can be the upper right hand corner of a 2x2 square); 4 3x3 squares; and 1 4x4 square. So an n x n grid will have $\sum k^2$ total squares. In this case 16 + 9 + 4 + 1 = 30.

The first top rectangle eliminates 2 1x1 squares and 1 2x2 square. So only 27 possible.

The left side rectangle eliminates 2 1x1 squares and 1 2x2 square. So only 24 possible.

The right side rectangle eliminates 2 1x1 squares (1 2x2 was already eliminated) and a 3x3. So only 21 possible.

The bottom rectangle eliminates 2 1x1 squares and 2 2x2 squares. So only 18 possible squares.
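The starting count in the second method is just the standard formula for a full $n\times n$ grid; a one-liner to reproduce it (my own sketch):

```python
# Squares of all sizes in a full n-by-n grid: sum over k of (n-k+1)^2.
def total_squares(n: int) -> int:
    return sum((n - k + 1) ** 2 for k in range(1, n + 1))

print(total_squares(4))  # 30, the count before subtracting blocked squares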
19. Are y'all skipping the big one on the outside? It can't be an even number, as there is only one outside square. I counted 19.
Inverse Mellin transform of $\Gamma(s)$ $\zeta(s/2)$ How would you find an approximation of the Inverse Mellin transform of $\Gamma(s)$ $\zeta(s/2)$ near $x=0$?
$\mathcal{M}^{-1}\left\{\Gamma(s)\zeta\left(\dfrac{s}{2}\right)\right\}$ $=\mathcal{M}^{-1}\left\{\dfrac{\Gamma(s)}{\Gamma\left(\dfrac{s}{2}\right)}\int_0^\infty\dfrac{x^{\frac{s}{2}-1}}{e^x-1}~dx\right\}$ $=\mathcal{M}^{-1}\left\{\dfrac{2^{s-1}\Gamma\left(\dfrac{s}{2}\right)\Gamma\left(\dfrac{s+1}{2}\right)}{\sqrt\pi~\Gamma\left(\dfrac{s}{2}\right)}\int_0^\infty\dfrac{x^{s-2}}{e^{x^2}-1}~d(x^2)\right\}$ $=\mathcal{M}^{-1}\left\{\dfrac{2^s}{\sqrt\pi}\int_0^\infty x^\frac{s-1}{2}e^{-x}~dx\int_0^\infty\dfrac{x^{s-1}}{e^{x^2}-1}~dx\right\}$ $=\mathcal{M}^{-1}\left\{\dfrac{2^s}{\sqrt\pi}\int_0^\infty x^{s-1}e^{-x^2}~d(x^2)\int_0^\infty\dfrac{x^{s-1}}{e^{x^2}-1}~dx\right\}$ $=\mathcal{M}^{-1}\left\{\dfrac{2^{s+1}}{\sqrt\pi}\int_0^\infty x^{s-1}e^{-x^2}~dx\int_0^\infty\dfrac{x^{s-1}}{e^{x^2}-1}~dx\right\}$ $=\mathcal{M}^{-1}\left\{\dfrac{2^{s+1}}{\sqrt\pi}\int_0^\infty\dfrac{x^{s-1}}{2^{s-1}}e^{-\frac{x^2}{4}}~d\left(\dfrac{x}{2}\right)\int_0^\infty\dfrac{x^{s-1}}{e^{x^2}-1}~dx\right\}$ $=\mathcal{M}^{-1}\left\{\dfrac{2}{\sqrt\pi}\int_0^\infty x^se^{-\frac{x^2}{4}}~dx\int_0^\infty\dfrac{x^{s-1}}{e^{x^2}-1}~dx\right\}$ $=\dfrac{2}{\sqrt\pi}\int_0^\infty\dfrac{xe^{-\frac{x^2}{4t^2}}}{t^2(e^{t^2}-1)}~dt$ (according to http://eqworld.ipmnet.ru/en/auxiliary/inttrans/mellin.pdf) $=-\dfrac{2x}{\sqrt\pi}\int_0^\infty\dfrac{e^{-\frac{x^2}{4t^2}}}{e^{t^2}-1}~d\left(\dfrac{1}{t}\right)$ $=\dfrac{2x}{\sqrt\pi}\int_0^\infty\dfrac{e^{-\frac{x^2t^2}{4}}}{e^\frac{1}{t^2}-1}~dt$ $=\dfrac{2x}{\sqrt\pi}\int_0^\infty\dfrac{e^{-\frac{x^2t^2}{4}-\frac{1}{t^2}}}{1-e^{-\frac{1}{t^2}}}~dt$ $=\dfrac{2x}{\sqrt\pi}\int_0^\infty\sum\limits_{n=0}^\infty e^{-\frac{x^2t^2}{4}-\frac{n+1}{t^2}}~dt$ $=2\sum\limits_{n=0}^\infty e^{-x\sqrt{n+1}}$ (according to How to evaluate $\int_{0}^{+\infty}\exp(-ax^2-\frac b{x^2})\,dx$ for $a,b>0$) $=2\sum\limits_{n=1}^\infty e^{-x\sqrt n}$
How to find the number of perfect matchings in complete graphs? In wikipedia FKT algorithm is given for planar graphs. Not anything for complete graphs. I need to find the number of perfect matchings in complete graph of six vertices.
It's just the number of ways of partitioning the six vertices into three sets of two vertices each, right? So that's 15; vertex 1 can go with any of the 5 others, then choose one of the 4 remaining, it can go with any of three others, then there are no more choices to make. $5\times3=15$.
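In general, $K_{2n}$ has $(2n-1)!!=(2n-1)(2n-3)\cdots 3\cdot 1$ perfect matchings, by exactly this pairing argument. A minimal sketch to compute it (names mine):

```python
# Perfect matchings of the complete graph on an even number of vertices.
def perfect_matchings(vertices: int) -> int:
    assert vertices % 2 == 0
    result = 1
    for k in range(vertices - 1, 0, -2):  # (v-1) * (v-3) * ... * 1
        result *= k
    return result

print(perfect_matchings(6))  # 15
```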
Gerry was correct (sort of) in his first statement, saying that it is the number of ways to partition the six vertices into three sets of two. However, the answer of number of perfect matching is not 15, it is 5. In fact, for any even complete graph G, G can be decomposed into n-1 perfect matchings. Try it for n=2,4,6 and you will see the pattern. Also, you can think of it this way: the number of edges in a complete graph is [(n)(n-1)]/2, and the number of edges per matching is n/2. What do you have left for the number of matchings? n-1.
GCD to LCM of multiple numbers If I know the GCD of 20 numbers, and know the 20 numbers, is there a formula such that I input the 20 numbers and input their GCD, and it outputs their LCM? I know that $$\frac{\left| a\cdot b\right|}{\gcd(a,b)} = \text{lcm}(a,b).$$ So is it$$\frac{\left| a\cdot b\cdot c\cdot d\cdot e\cdot f\right|}{\gcd(a,b,c,d,e,f)}?$$If not, what is it?
There can be no formula that computes $\text{lcm}(a,b,c)$ using only the values of $abc$ and $\gcd(a,b,c)$ as input: that's because $(a,b,c) = (1,2,2)$ and $(a,b,c) = (1,1,4)$ both have $abc=4$, $\gcd(a,b,c)=1$, but they don't have the same lcm. However, there is a straightforward generalization of the $2$-variable formula. For instance, $$\text{lcm}(a,b,c,d) = \frac{abcd}{\gcd(abc,abd,acd,bcd)}.$$ The correct gcd to take is not of the individual terms $a,b,c,d$ but the products of all the complementary terms (which looks the same in the two-variable case).
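The generalized formula is easy to test exhaustively against a reference LCM; a minimal sketch (mine, assuming Python 3.9+ for `math.lcm`):

```python
from math import gcd, lcm

# lcm(a,b,c,d) = abcd / gcd of the four complementary triple products.
def lcm4(a, b, c, d):
    g = gcd(gcd(a*b*c, a*b*d), gcd(a*c*d, b*c*d))
    return a * b * c * d // g

assert all(lcm4(a, b, c, d) == lcm(a, b, c, d)
           for a in range(1, 10) for b in range(1, 10)
           for c in range(1, 10) for d in range(1, 10))
print(lcm4(4, 6, 10, 14))  # 420
```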
LCM can be calculated using: $\text{GCD}(a,b)\cdot \text{LCM}(a,b)=ab$ and $\text{LCM}(a,b,c,d,...n)=\text{LCM}(\text{LCM}(a,b),c,d,...,n)$ Example $\text{LCM}(2,4,5,7)=\text{LCM}(4,5,7)=\text{LCM}(20,7)=140$ or more concisely $$ \begin{align} 2&amp;,4,5,7 \\ 4&amp;,5,7 \\ 20&amp;,7 \\ 140&amp; \end{align} $$ http://mathhelpforum.com/peer-math-review/229498-lcm-b-c-abc.html
Area between $y=x^4$ and $y=x$ The problem I'm having some trouble solving is this: calculate the area between $y=x^4$ and $y=x$. The points are $a = 0$ and $b = 1$, but the definite integral is negative. What am I doing wrong here?
You have $\displaystyle A=\int_0^1(x-x^4)dx=\left[\frac{x^2}{2}-\frac{x^5}{5}\right]_{0}^1=\frac{3}{10}$ or, equivalently, $\displaystyle A=\int_0^1(y^{\frac{1}{4}}-y)dy=\left[\frac{4}{5}y^{\frac{5}{4}}-\frac{y^2}{2}\right]_0^1=\frac{3}{10}$.
$\int_{0}^{1}\int_{x^4}^{x}dy\,dx$ would give you your answer. Or in simpler form $\int_{0}^{1}(x-x^4)\,dx$. I would have even attached the graph of the two functions, but I don't know how to. NOTE: In $[0,1]$, $x\geq x^4$. Always draw the graphs of the corresponding functions before solving such questions. Helps a lot. :)
Stronger than Nesbitt inequality For $x,y,z >0$, prove that $$\frac{x}{y+z}+\frac{y}{z+x}+\frac{z}{x+y} \geqslant \sqrt{\frac94+\frac32 \cdot \frac{(y-z)^2}{xy+yz+zx}}$$ Observation: This inequality is stronger than the famous Nesbitt's Inequality $$\frac{x}{y+z}+\frac{y}{z+x}+\frac{z}{x+y} \geqslant \frac32 $$ for positive $x,y,z$ We have three variables but the symmetry holds only for two variables $y,z$, resulting in a very difficult inequality. Brute force and Lagrange multipliers are too complicated. The constant $\frac32$ is close to the best constant. Thus, this inequality is very sharp; simple AM-GM estimation did not work. Update: As pointed out by Michael Rozenberg, this inequality is still unsolved
Here's a proof. Using Lagrange's identity leads to $$\sum_{cyc}\frac{x}{y+z}=\frac32+\frac12\sum_{cyc}\frac{(x-y)^2}{(x+z)(y+z)} \qquad (*)$$ We get this by letting $a = \sqrt{x+y}$, $b = \sqrt{y+z}$, $c = \sqrt{z+x}$, $d = \frac{1}{\sqrt{x+y}}$, $e = \frac{1}{\sqrt{y+z}}$, $f = \frac{1}{\sqrt{z+x}}$, and by Lagrange's identity, $$(a^2+b^2+c^2)(d^2+e^2+f^2) = (ad+be+cf)^2 + (ae-bd)^2+(af-cd)^2+(bf-ce)^2$$ which gives $$2 (x+y+z)(\sum_{cyc}\frac{1}{y+z}) = 9 + \sum_{cyc} \left(\frac{\sqrt{x+y}}{\sqrt{y+z}} - \frac{\sqrt{y+z}}{\sqrt{x+y}}\right)^2$$ or $$3 +\sum_{cyc}\frac{x}{y+z} = \frac92 + \frac12\sum_{cyc} \frac{\left({x+y} - (y+z)\right)^2}{({y+z})(x+y)} $$ which is the desired equation $(*)$. Squaring both sides of the inequality then gives $$ \tag{1} \frac16 \left[ \sum_{cyc}\frac{(x-y)^2}{(x+z)(y+z)} \right ]^2 + \sum_{cyc}\frac{(x-y)^2}{(x+z)(y+z)} \geq \frac{(y-z)^2}{xy + yz + xz} $$ We will follow two paths for separate cases. Path 1: Omitting the square term it suffices to prove $$\sum_{cyc}\frac{(x-y)^2}{(x+z)(y+z)} \geqslant \frac{(y-z)^2}{xy + yz + xz}$$ Clearing denominators, we obtain $$ (x-y)^2 (x+y) + (y-z)^2 (y+z) + (z-x)^2 (z+x) \geq \frac{(y-z)^2}{xy + yz + xz} (x+y) (y+z) (z+x) $$ Using $(x+y)(y+z)(x+z) = (x+y+z)(xy + yz+ xz) - x y z$ it suffices to show $$ (x-y)^2 (x+y) + (y-z)^2 (y+z) + (z-x)^2 (z+x) \geq {(y-z)^2} (x+y+z) $$ or $$ (x-y)^2 (x+y) + (z-x)^2 (z+x) \geq (y-z)^2 x $$ Since $$ (y-z)^2 = (y-x + x- z)^2 = (y-x)^2 + (x- z)^2 + 2 (y-x)(x-z) $$ this translates into $$ (y-x)^2 y + (x-z)^2 z \geq 2 x(y-x)(x-z) $$ For the two cases $y\geq x ; z\geq x $ and $y\leq x ; z\leq x $ the RHS $\leq 0$ so we are done. For the other two cases, by symmetry, it remains to show the case $y> x ; z < x $. Rearranging terms, we can also write $$ (y-x)^3 - (x-z)^3 +x ((y - x) + (z-x))^2 \geq 0 $$ This holds true at least for $(y-x)^3 \geq (x-z)^3$ or $y+z\geq 2 x$. So the proof is complete other than for the case $y+z < 2x$ and [ $y> x ; z < x $ or $z> x ; y < x $ ]. Path 2. For the remaining case $y+z < 2 x $ and [$y> x ; z < x$ or $z> x ; y < x $] we will follow a different path. Again, by symmetry, we must inspect only $y+z < 2x$ and $y> x > z$. A remark up front: In the following, some high order polynomials of one variable have to be inspected. MATLAB is used, also for plotting behaviours of these polynomials. There is no point in spending effort on further analytical work on polynomials where their behaviour is obvious. Still, what follows contains some "ugly" parts. In the squared version (1) of the inequality, we can use a further inequality which has been proved here: $$\sum_{cyc}\frac{(x-y)^2}{(x+z)(y+z)} \geqslant \frac{27}{8} \frac{(y-z)^2}{(x+y+z)^2}$$ So it suffices to prove $$ \frac16 \left[ \frac{27}{8} \frac{(y-z)^2}{(x+y+z)^2} \right ]^2 + \sum_{cyc}\frac{(x-y)^2}{(x+z)(y+z)} \geq \frac{(y-z)^2}{xy + yz + xz} $$ Some numerical inspection shows immediately that the first term cannot be omitted. Clearing some denominators gives $$ \tag{2} \frac16 \left[ \frac{27}{8} \right ]^2 \frac{(y-z)^4 (xy + yz + xz) (x+y)(y+z) (z+x)}{(x+y+z)^4} + \\ (xy + yz + xz) \sum_{cyc} {(x-y)^2}{(x+y)} - (y-z)^2 (x+y)(y+z) (z+x) \geqslant 0 \quad $$ By homogeneity, we set $y=1+z$. The condition $y+z < 2x$ then translates into $1+2z < 2x$, hence we further set $x = z + (1 +q)/2$ where $0\leq q \leq 1$ since also $x = z + (1 +q)/2 < y = 1 +z$.
Inserting $y=1+z$ and $x = z + (1 +q)/2$ into (2) is straightforward, the result is lengthy (not displayed here). Let us start with focussing on the first term in (2), calling that fraction $F$: $$ F= \frac{(y-z)^4 (xy + yz + xz) (x+y)(y+z) (z+x)}{(x+y+z)^4} $$ With the setting $y=1+z$ this can be simplified to $$ F = \frac{(xy + yz + xz) (x+y)(y+z) (z+x)}{(x+y+z)^4} $$ Since by the settings $y=1+z$ and $x = z + (1 +q)/2$ both x and y are linear in z, the numerator of $F$ is of fifth order in $z$, whereas the denominator is of fourth order in $z$. In leading order, the whole term is therefore of first order in $z$ and will therefore rise with $z$ for large enough $z$. This motivates to show that indeed $$F(q,z) \geq F(q,z=0) = 2 \frac{(q+1)^2}{(q+3)^2}$$ for all $z$ and $q$. Showing that directly requires a condition $$ G = (xy + yz + xz) (x+y)(y+z) (z+x) (q+3)^2 - 2 (q+1)^2 (x+y+z)^4 \geq 0 $$ Inserting $y=1+z$ and $x = z + (1 +q)/2$ into $G$, and expanding the brackets, gives a very lengthy expression which however contains only positive terms, so the condition is proved immediately: $$ G = (z(2q^6z + 2q^6 + 22q^5z^2 + 51q^5z + 22q^5 + 80q^4z^3 + 358q^4z^2 + 357q^4z + 100q^4 + 96q^3z^4 + 960q^3z^3 + 1828q^3z^2 + 1206q^3z + 260q^3 + 864q^2z^4 + 3672q^2z^3 + 4788q^2z^2 + 2484q^2z + 450q^2 + 2592qz^4 + 7344qz^3 + 7398qz^2 + 3159qz + 486q + 2592z^4 + 5832z^3 + 4806z^2 + 1701z + 216))/4 \geq 0 $$ Hence it suffices, instead of (2), to prove the following: $$ \tag{3} \frac16 \left[ \frac{27}{8} \right ]^2 2 \frac{(q+1)^2}{(q+3)^2} + (xy + yz + xz) \sum_{cyc} {(x-y)^2}{(x+y)} \\ - (x+y)(y+z) (z+x) \geqslant 0 \quad $$ After insertion of $y=1+z$ and $x = z + (1 +q)/2$, the factors $(x-y)^2$ in the cyclic sum will not be functions of $z$. Hence, the LHS is a third order expression in $z$ with leading (in $z^3$ ) term $( 1 + 3 q^2) z^3$, so for large enough $z$ it is rising with $z$. A remarkable feature of this expression is that for the considered range $0\leq q \leq 1$ it is actually monotonously rising for all $z$. To see this, consider whether there are points with zero slope. The first derivative of the expression with respect to $z$ is $$ q^4/4 + (7q^3z)/2 + (7q^3)/4 + 9q^2z^2 + 9q^2z + (5q^2)/4 - (7qz)/2 - (7q)/4 + 3z^2 + 3z + 1/2 $$ Equating this to zero gives $$ z_{1} = -(2((13q^6)/4 + (17q^4)/2 + (133q^2)/4 + 3)^{(1/2)} - 7q + 18q^2 + 7q^3 + 6)/(36q^2 + 12)\\ z_{2} = -(-2((13q^6)/4 + (17q^4)/2 + (133q^2)/4 + 3)^{(1/2)} - 7q + 18q^2 + 7q^3 + 6)/(36q^2 + 12) $$ One now shows that in the range $0\leq q \leq 1$, there are only negative solutions $z_{1,2}$. So we will have no zero slopes for $z\geq 0$. Since the polynomials are (roots of) sixth order in q, we investigate the following figures: So monotonicity (rising with $z$) is established for all $q$. Hence, to show the inequality it suffices to inspect (3) at the smallest $z=0$. This gives $$ ((q + 1)(8q^6 + 88q^5 + 336q^4 + 432q^3 - 216q^2 - 405q + 243))/(64(q + 3)^3) \geqslant 0 \quad $$ or $$ \tag{4} 8q^6 + 88q^5 + 336q^4 + 432q^3 - 216q^2 - 405q + 243 \geqslant 0 \quad $$ An even stronger requirement is $$ h(q) = 432q^3 - 216q^2 - 405q + 243 \geq 0 $$ In the considered range $0\leq q \leq 1$, $h(q)$ has a minimum which is obtained by taking the first derivative, $$ 1296 q^2 - 432 q - 405 $$ and equating to zero, which gives $q = 3/4$, and the above $h(q)$ then gives $$ h(q = 3/4) = 0 $$ This establishes the inequality. $ \qquad \Box$
Very long partial answer: As in my other answer we have, using the abc method or uvw method, the inequality: Let $a,b,c>0$ $$\sum_{cyc}\frac{a}{b+c}\geq \sqrt{\frac{9}{4}+\frac{7}{4}\frac{\left(a^{2}+b^{2}+c^{2}-ab-bc-ca\right)}{ab+bc+ca}}$$ See the WA factorization for the case which is sufficient to show, I mean $b=c=1$ (because it's homogeneous). As a partial answer we have the constraint: $$\frac{7*2}{4*3}(a^{2}+b^{2}+c^{2}-ab-bc-ca)\geq (a-b)^2$$ Hope it inspires you! Update 22/04/2021 Let $a,b,c\in[0.5,1]$; then we have: $$\sum_{cyc}\frac{a}{b+c}\geq 1.5+0.5\frac{(a-b)^2}{ab+bc+ca}\geq \sqrt{\frac{9}{4}+1.5\frac{(a-b)^2}{ab+bc+ca}}$$ Proof: The RHS is equivalent to: $$0.5\frac{(a-b)^2}{ab+bc+ca}\geq 0$$ For the LHS we can use BW easily. We can go further, as we have: $$\sum_{cyc}\frac{a}{b+c}\geq 1.5+0.5\frac{(a-b)^2(a+b+c)}{(a+b)(b+c)(c+a)} \geq 1.5+0.5\frac{(a-b)^2}{ab+bc+ca}\geq \sqrt{\frac{9}{4}+1.5\frac{(a-b)^2}{ab+bc+ca}}$$ With the constraint: $$(0.5(a+b-2c)(a^2-ab+b^2-c^2))\geq 0$$ Again I go further: We have $$\sum_{cyc}\frac{a}{b+c}\geq 1.5+\frac{44}{100}\frac{(a-b)^2(a+b+c)}{(a+b)(b+c)(c+a)} \geq \sqrt{\frac{9}{4}+\frac{132}{100}\frac{(a-b)^2(a+b+c)}{(a+b)(b+c)(c+a)}} \geq \sqrt{\frac{9}{4}+1.5\frac{(a-b)^2}{ab+bc+ca}}$$ With the first constraint: $$ (28 a^3 - 3 a^2 b - 47 a^2 c - 3 a b^2 + 44 a b c - 25 a c^2 + 28 b^3 - 47 b^2 c - 25 b c^2 + 50 c^3)\geq 0$$ And the second: $$\frac{132}{100}\frac{2}{3}(a+b+c)(ab+bc+ca)\geq(a+b)(b+c)(c+a)$$ Update 23/04/2021: A general way would be: We have $$\sum_{cyc}\frac{a}{b+c}\geq 1.5+y\frac{(a-b)^2(a+b+c)}{(a+b)(b+c)(c+a)} \geq \sqrt{\frac{9}{4}+3y\frac{(a-b)^2(a+b+c)}{(a+b)(b+c)(c+a)}} \geq \sqrt{\frac{9}{4}+1.5\frac{(a-b)^2}{ab+bc+ca}}\quad(I)$$ With the constraint: $$-(0.5 (2 a^3 y - 2 a^3 - 2 a^2 b y + a^2 b + 2 a^2 c y + a^2 c - 2 a b^2 y + a b^2 - 4 a b c y + a c^2 + 2 b^3 y - 2 b^3 + 2 b^2 c y + b^2 c + b c^2 - 2 c^3))\geq 0$$ And: $$3y\frac{2}{3}(a+b+c)(ab+bc+ca)\geq(a+b)(b+c)(c+a)$$ So to be true we have: $$\frac{\left(a\left(a+b\right)\left(a+c\right)+b\left(b+c\right)\left(b+a\right)+c\left(a+c\right)\left(c+b\right)-1.5\left(a+b\right)\left(b+c\right)\left(c+a\right)\right)}{\left(a-b\right)^{2}\left(a+b+c\right)}-\frac{\left(a+b\right)\left(b+c\right)\left(c+a\right)}{2\left(a+b+c\right)\left(ab+bc+ca\right)}\geq 0$$ We have to establish an assumption, because like this the inequality is false ($a,b,c>0$ is not sufficient). A temporary and sufficient assumption is $a\in[0.25,0.5)$ and $b,c\in[0.5,1]$. Using the first inequality and using the variable $y$ as in the inequality $I$, we need to show for $a\ge c\ge b$ and $2c-2b\le a$: $$\left(\frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b}-1.5\right)\left(ab+bc+ca\right)-\frac{\left(a-b\right)^{2}}{2}\geq 0\quad (C)$$ Update 24/04/2021 We have the inequality for $1\ge a\geq c\geq b>0$ and $\frac{\left(a-b\right)^{2}}{ab+bc+ca}\le 1$ or $\frac{\left(a-b\right)^{2}}{ab+bc+ca}\ge 1.2$ and $2c-2b\ge a$: $$\frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b}\ge 1.5+\frac{\left(\sqrt{\frac{15}{4}}-1.5\right)\left(\left|a-b\right|\right)^{1.75}}{\left(ab+bc+ca\right)^{\frac{1.75}{2}}}\geq \sqrt{\frac{9}{4}+1.5\frac{(a-b)^2}{ab+bc+ca}} $$ Update 25/04/2021 To prove it we remark: $$f(c)=\frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b}-1.5$$ Is increasing with $a\leq 2c-2b$ $$g(c)=\left(ab+bc+ca\right)^{\frac{1.75}{2}}$$ Is increasing. So the product of two positive increasing functions is also an increasing function, always with the constraints above. It remains to show the case $a=2c-2b$.
As it's homogeneous we get an inequality with one variable or a long polynomial. The RHS is trivial. Update 26/04/2021: To prove partially the inequality $(C)$ we have for $a\ge 2c\ge c\ge b$ and $2c-2b\le a$: $$\left(\frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b}-1.5\right)\left(ab+bc+ca\right)-\frac{\left(a-b\right)^{2}}{2}\geq \frac{2(a^2+b^2+c^2-ab-bc-ca)}{3}\geq\frac{\left(a-b\right)^{2}}{2}\quad $$ Which is trivial using the $uvw$'s method and the assumptions on $a,b,c>0$. For the case $2c\geq a \geq c\geq b$ we add the assumptions $c+b\leq a $ and $a\geq 2c-2b$, remarking that: $$f(c)=\left(\frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b}-1.5\right)\left(ab+bc+ca\right)$$ Is a decreasing function, always with the assumptions above. It remains to show the cases $a=c+b$ or $a=2c-2b$, which is a one-variable inequality since all the equalities/inequalities are homogeneous. I shall prove that the function is decreasing later. To prove that the function is decreasing we differentiate twice; we have: $$f''(c)=-\frac{2a^2b}{(a+c)^3}-\frac{2ab^2}{(b+c)^3 }+2$$ With the assumptions above the function $f(c)$ is convex, so the derivative is increasing. It remains to show the inequality at the equality case, which is not too hard I think. From my first answer we have: $$\sum_{cyc}\frac{a}{b+c}\geq P(a,b,c)=\sqrt{\frac{9}{4}+\frac{9}{4}\frac{(c^4+a^4+b^4-c^2a^2-b^2a^2-c^2b^2)}{((a+b+c)\frac{3}{4}+\frac{3}{4}(abc)^{\frac{1}{3}})(a+b)(b+c)(c+a)}+\frac{(c^2+a^2+b^2-ca-ba-cb)(a+b+c)}{(a+b)(b+c)(c+a)}}$$ It remains to show: $$P(a,b,c)\geq \sqrt{\frac{9}{4}+\frac{3}{2}\frac{\left(a-b\right)^2}{ab+bc+ca}}$$ With assumptions $2c\geq a \geq c\geq b$ and $c+b\geq a $ and $a\geq 2c-2b$. We have: Proof: we have the refinement of the inequality above: $$\frac{9}{1}\frac{(c^4+a^4+b^4-c^2a^2-b^2a^2-c^2b^2)}{((a+b+c)\frac{3}{4}+\frac{3}{4}(abc)^{\frac{1}{3}})(a+b)(b+c)(c+a)}\frac{(c^2+a^2+b^2-ca-ba-cb)(a+b+c)}{(a+b)(b+c)(c+a)}\geq \frac{9}{4}\frac{\left(a-b\right)^4}{(ab+bc+ca)^2}$$ Multiplying by the different denominators and replacing $a$ by $c+b$ and $a$ by $2c-2b$, eliminating a variable because the inequality is homogeneous, we get a decreasing polynomial for the LHS and an increasing polynomial for the RHS, which are: $$4((2-2x)(1+x)+x)^2(2+x)((2-2x)^4+1+x^4-(x^2+1)(2-2x)^2)((2-2x)^2+1+x^2-(x+1)(2-2x))\geq ((x+1)(x+2)(1+2x))^2(0.75(2+2x)+0.75((1+x)x)^{\frac{1}{3}})$$ With $0.33\leq x\leq 0.3675$ and $x=\frac{b}{c}$. Hope it helps you!
What is $(7^{2005}-1)/6 \pmod {1000}$? What is $$\frac{7^{2005}-1}{6} \quad(\operatorname{mod} 1000)\:?$$ My approach: Since $7^{\phi(1000)}=7^{400}=1 \bmod 1000$, $7^{2000}$ also is $1 \bmod 1000$. So, if you write $7^{2000}$ as $1000x+1$ for some integer $x$, then we are trying to find $((1000x+1)\cdot(7^5)-1)/6 = (16807000x + 16806)/6 \pmod {1000}$. Obviously, this must be an integer, so $x=3y$ for some $y$. Then, we are trying to find $16807000\cdot 3y/6+2801 \pmod {1000} = 500y+801 \pmod {1000}$. However, this value can be $301$ or $801$, and I am not sure how to find which one is correct. Any help is appreciated!
By geometric sum formula we have $$\frac{7^{2005}-1}{6} = \frac{7^{2005}-1}{7-1} = 1+7+7^2+\dots+7^{2004}$$ The sequence $1,7,7^2,\dots$ has period $20 $ modulo $1000$ (since $7^{20} \equiv 1 \pmod{1000}$). $$1+7+7^2+\dots+7^{2004} \equiv 100(1+\dots+7^{19})+1+7+7^2+7^3+7^4 \equiv 100\cdot 0+801\equiv 801 \pmod{1000} $$
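Since Python integers have arbitrary precision, the whole computation can be checked directly (my own sanity check, not part of the answer):

```python
N = (7**2005 - 1) // 6      # exact integer, about 1700 digits
print(N % 1000)             # 801
print(pow(7, 20, 1000))     # 1: the powers of 7 have period 20 mod 1000
```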
If $k$ and $n$ are not co-prime, you have to be very careful when dividing by $k\bmod n$ $-$ there may be more than one answer, as you saw. So $806/6\bmod 1000$ is not well-defined, because both $6\cdot 301$ and $6\cdot 801$ are equal to $806\bmod 1000$. But in this case the problem is well-defined, because the expression $\dfrac{7^{2005}-1}{6}$ is an integer. So we can calculate it before reducing $\bmod 1000$. hgmath's answer shows one way to do this (and Raffaele's answer doesn't).
standard Taylor series using substitution Find Taylor series using substitution about $0$ for $f(x)=\frac{125}{(5+4x)^3}$ by writing $\frac{125}{(5+4x)^3}=\frac{1}{(1+\frac{4}{5}x)^3}$? Determine a range of validity for this series.
Hint Start with the Taylor expansion $$\frac{1}{1+y}=1-y+y^2-y^3+y^4-y^5+O\left(y^6\right)$$ Raise to the third power to obtain $$\frac{1}{(1+y)^3}=1-3 y+6 y^2-10 y^3+15 y^4-21 y^5+O\left(y^6\right)$$ Now, replace $y$ by $\frac{4 x}{5}$ to get your result. I am sure that you can take it from here.
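SymPy will reproduce the composed expansion if you want to check the arithmetic — a minimal sketch (mine, not part of the hint):

```python
from sympy import symbols, series

x = symbols('x')
# Expand 125/(5+4x)^3 = (1 + 4x/5)^(-3) about x = 0.
print(series(125 / (5 + 4*x)**3, x, 0, 5))
# Expected: 1 - 12*x/5 + 96*x**2/25 - 128*x**3/25 + 768*x**4/125 + O(x**5)
```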
You have $(1+\frac45 x)^{-3}=1-\frac{12}{5}x+\frac{64}{25}x^2-\cdots$, which is the required expansion about $x=0$. For the range, it's true for $|\frac45 x|<1$.
How do I solve $y'=\frac{x-y}{x+y}$? This is what I have so far: $y'=\frac{x-y}{x+y}$ $ y'=\frac{x}{x+y}-\frac{y}{x+y}$ $ y'=\frac{1}{1+\frac{y}{x}}-\frac{1}{\frac{x}{y}+1} $ Make a substitution here: $v=\frac{y}{x}, y=vx, y'=v'x+v$ $ v'x+v = \frac{1}{1+v}-\frac{1}{\frac{1}{v}+1}$ Do I just simplify the right side here, and then separate variables? When I tried this, I got: $\int \frac{1+v}{1-2v-v^2}dv = \ln x+C_0$ I didn't know how to solve the integral here. Any ideas?
You are complicating it. Sum $1$ to both sides to get $$y'+1=\frac{x-y}{x+y}+1=\frac{2x}{x+y}.$$ Now, because $(x+y)'=y'+1$, we get $$(x+y)'=\frac{2x}{x+y}.$$ Familiar, right? Write $u=y+x$, so $u'=2x/u \implies u^2/2=x^2+A$. We finally have $$y=-x\pm\sqrt{2x^2+B}.$$
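One branch of the claimed solution can be verified symbolically — a minimal SymPy check (mine, not part of the answer):

```python
from sympy import symbols, sqrt, diff, simplify

x, B = symbols('x B', positive=True)
y = -x + sqrt(2*x**2 + B)                      # the "+" branch of the solution
print(simplify(diff(y, x) - (x - y)/(x + y)))  # 0, so the ODE is satisfied
```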
You can solve this integral by using the partial fractions technique. But tell me this: is that simplification correct? You got this result below: $$\int \frac{1+v}{1-2v-v^2}dv$$ I've got this one: $$\int \frac{1-v}{1-v}dv$$ That is $1$ if $v$ is different from $1$, right? Am I off track here? You got $$ \frac{1}{1-v}- \frac{1}{\frac{1}{v}-1} = K$$ $$\therefore K = \frac{1}{1-v} -\frac{1}{\frac{1-v}{v}} $$ $$ \therefore K = \frac{1}{1-v} - \frac{v}{1-v} $$ $$ \therefore K = \frac{1-v}{1-v}=1$$ So $v$ can't be $0$ and can't be $1$ either.
Homework Question. Joint Probability Distribution. Here is the question. The joint PDF of X and Y is given by $f_{XY}(x,y) = {\frac 14} e^{-|x|-|y|}$. Find $P(X \le 1 \text{ and } Y \le 0)$. Solving the problem, I first found the marginal probabilities of X and Y. Can you please explain what I should do next?
To find the probability of some event $E$, you have to compute $$ P(E) = \int_E \:dF $$ where $F$ is your probability distribution. In your case, since your probability distribution has a density, you can also express that as $$ P(E) = \int_E f \:d\lambda $$ where $\lambda$ is the Lebesgue measure on your probability space (which then is a subset of some $\mathbb{R}^d$). The Lebesgue measure is the usual measure of length/area/volume/$\ldots$, so this is just a plain old integral. Additionally, in your case $E = (-\infty,1]\times(-\infty,0]$ is rectangular, which makes the integration especially simple. You get $$ P(E) = \int_E f \:d\lambda = \int_{-\infty}^1\int_{-\infty}^0 f(x,y) \:dy\,dx \text{.} $$
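As a numerical cross-check (mine, assuming SciPy is available): here the density factors as a product, so X and Y are independent and the rectangle probability is a product of two one-dimensional integrals, matching the closed form $\frac12-\frac1{4e}$.

```python
import math
from scipy.integrate import quad

fx = lambda x: 0.5 * math.exp(-abs(x))    # marginal density of X (and of Y)
px, _ = quad(fx, -math.inf, 1)            # P(X <= 1)
py, _ = quad(fx, -math.inf, 0)            # P(Y <= 0) = 1/2
print(px * py, 0.5 - 1/(4*math.e))        # both ~0.40803
```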
Solution using the mathStatica / Mathematica combo: The joint pdf $f(x,y)$ is given as: f = Exp[-Abs[x] - Abs[y]]/4; domain[f] = {{x, -∞, ∞}, {y, -∞, ∞}}; You seek: Prob[x &lt; 1 &amp;&amp; y &lt; 0, f] returns: $\frac 12 - \frac{1}{4 e}$ [ Even if this is homework, I guess your lecturer will want to see some workings, but this will give you something to aim at :) ]
A problem with planes, normal equations, lines and points. I was given the following problem and would like to know if my idea of how to solve it is correct. I usually type my answers, but due to my sketch I'll attach a picture with my interpretation of the problem and its possible solution. Question: Find the normal of the lines $l$ that pass through the point $(1,0,2)$ and are orthogonal to a plane whose normal equation is $x+y-2z = 4$. Basically my idea was the following: if I have a line orthogonal to a plane in $R^3$, then the normal vector of that plane will be parallel to my line. Then I want to find the normal of the lines that pass through a point $P_o$. A line can be held with two vectors $(A,B,C)$ and $(a,b,c)$ whose dot product with the direction vector $(1,1,-2)$ is normal to the line and passes through $P_o$. Please let me know if my work is correct or, if I went wrong, where I failed to interpret the problem and what is wrong with my sketch. I genuinely appreciate any help. Thank you.
$(1,1,-2)$ is normal to your plane. This is the direction vector of your line. Vector equation: $L: (1,0,2) + (1,1,-2)t$ Parametric equations: $x = 1+t, y = 0+t, z = 2-2t$ Simultaneous equations: $\frac {x-1}{1} = \frac {y-0}{1}= \frac {z-2}{-2}$
Condition on a point on axis of the parabola so that $3$ distinct normals can be drawn from it to the parabola. The question is this: For the parabola: $$ (x-1)^2 + (y-1)^2 = \left(\dfrac{x+y} {\sqrt2}\right)^2 $$ what is the condition on the point $(h,h)$ (which lies on the axis of the parabola) that $3$ distinct normals can be drawn from it to the parabola? Now, I have this doubt: from any point on the axis (inside the parabola) we can draw three normals to the parabola (no concrete proof, just visualisation), right? So isn't this question wrong?
After all the discussion in the comments part we have, perhaps, the solution to the edited question: From a point on the parabola's axis it is possible to draw three different normals to the parabola iff the point lies "inside" the parabola and is not the parabola's vertex (which could be considered not to be "inside" the parabola anyway... but it's better, imo, to be thorough here and mention this). Added: Thanks to one of the last comments of Henning below this answer, it is possible to see that the $y$-intercept of any normal to $y=x^2$ is $\frac12+a^2$, from where it follows that no point on the parabola's axis and inside the parabola can have $y$-intercept $\,\le\frac12\;$ and have any hope of being one of the wanted points from which three normals to the parabola can be drawn!
This can be done in a more generic manner with all variables, in which case, from any point $P(h,k)$ there exist three normals to a parabola $y^2 = 4ax$ if $h> 2a$.
We throw two distinct dice. What is the probability that there will be at least one 6, given that the dice show different results? We throw two distinct dice. What is the probability that there will be at least one 6, given that the dice show different results? My work: Let $A=$ we have a six, $B=$ different results. Then $P(A|B)=\frac{P(A\cap B)}{P(B)}=\frac{1}{6}$ This because: $P(B)=\frac{30}{36}$ and $P(A\cap B)=\frac{5}{36}$ Is this correct?
Observe that the numbers on the two dice are distinct. They are also indistinguishable, so that any one of the six numbers has equal probability of showing. The probability that one of them is a six is therefore one third (two numbers out of six are showing).
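Since the sample space is tiny, the conditional probability can also be confirmed by brute-force enumeration — a minimal sketch (mine, not part of the answer):

```python
from itertools import product

outcomes  = list(product(range(1, 7), repeat=2))   # 36 ordered rolls
different = [o for o in outcomes if o[0] != o[1]]  # condition B
with_six  = [o for o in different if 6 in o]       # event A within B
print(len(with_six), "/", len(different))          # 10 / 30 = 1/3
```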
I agree until your calculation for $P(A\cap B)=\frac{5}{36}$. Note that A or B could be the $6$, and that we want them to not both be 6. There's $12$ ways that A or B is $6$ and $1$ way that they are both $6$, so $P(A\cap B)=\frac{12 - 1}{36}=\frac{11}{36}$. Substitute this back in and the final answer would be $\frac{11}{30}$
minimizing the norm of a curl over a domain According to my computations: The function which minimizes $$\int_\Omega \|\operatorname{curl} f\|^2\,dx$$ should satisfy $$\operatorname{curl}(\operatorname{curl}f) = 0$$ everywhere on $\Omega$, provided $\operatorname{curl} f = 0$ on $\partial \Omega$. I followed the same kind of computation as the one demonstrating that the argmin of $\int_\Omega \|\nabla f\|^2~dx$ should satisfy $\Delta f = 0$. However, I am not sure whether my computations are right... Could anyone check that, please? 1) We first start with a functional $$G(f) = \int_\Omega \|\operatorname{curl} f\|^2\,dx.$$ 2) We compute $$V(f,h) = \lim_{\epsilon\rightarrow 0} \frac{G(f+\epsilon h)-G(f)}{\epsilon} = 2\int_\Omega \operatorname{curl} f\cdot\operatorname{curl} h\,dx.$$ 3) We use the identity: $$\operatorname{div}(A\times B) = -A\cdot\operatorname{curl} B + B\cdot\operatorname{curl} A,$$ with $A=\operatorname{curl} f$ and $B=h$. 4) We obtain $$\int_\Omega \operatorname{curl} f\cdot \operatorname{curl} h\:dx = -\int_\Omega \operatorname{div}(\operatorname{curl} f\times h)\,dx + \int_\Omega h\cdot \operatorname{curl}(\operatorname{curl}f)\,dx.$$ 5) We use the divergence theorem to obtain: $$\int_\Omega \operatorname{curl} f\cdot \operatorname{curl} h\,dx = -\int_{\partial\Omega} \operatorname{curl} f\times h\,ds + \int_\Omega h\cdot\operatorname{curl}(\operatorname{curl}f)\,dx$$ 6) We assumed $\operatorname{curl} f = 0$ on $\partial\Omega$, so the first term is zero. 7) $V(f,h)$ should equal zero for all $h$ for the function to be minimized, so $$\int_\Omega h\cdot\operatorname{curl}(\operatorname{curl}f) dx = 0\quad\forall h,$$ which implies $\operatorname{curl}(\operatorname{curl}f) = 0$ locally. I guess this reasoning may be wrong in several places... or is it right? Thanks!
OP is mostly perfect until some later parts. For the curl operator we have a similar integration by parts formula to the divergence theorem: $$ \int_{\Omega}\nabla \cdot (u\nabla v) = \int_{\Omega}(u\Delta v + \nabla u\cdot \nabla v) = \int_{\partial \Omega} u\nabla v\cdot \nu \,dS, $$ where $\nu$ is the outward unit normal to the boundary. The curl integration by parts formula is: $$ \int_{\Omega}\nabla \cdot (\phi\times \psi) = \int_{\Omega}(\psi\cdot\nabla \times \phi - \phi\cdot \nabla\times \psi) = \int_{\partial \Omega} \phi\times \psi\cdot \nu \,dS = \int_{\partial \Omega} \nu\times \phi\cdot \psi \,dS.\tag{1} $$ Say now you wanna minimize (here I use $u$ and $v$ instead of $f$ and $h$): $$ \mathcal{G}(v) = \int_{\Omega}|\nabla\times v|^2\,dx, $$ if the minimizer is $u$, then taking the variational limit is also no problem: $$ 0= \frac{d}{d\epsilon}\mathcal{G}(u+\epsilon v)\Bigg|_{\epsilon= 0} = 2\int_{\Omega} \nabla \times u\cdot \nabla \times v. $$ Now we use formula (1), letting $\phi = \nabla \times u$, and $\psi = v$: $$ \int_{\Omega}(v\cdot\nabla \times (\nabla \times u)- \nabla \times u\cdot \nabla\times v) =\left\{\begin{aligned} \int_{\partial \Omega} \big(\nu\times (\nabla \times u)\big)\cdot v\,dS \\ \\ \int_{\partial \Omega} (v\times \nu) \cdot (\nabla \times u)\,dS \end{aligned}\right. $$ There are two ways to make the boundary term vanish: You can set for the minimizer $\color{blue}{\nu\times (\nabla \times u)|_{\partial \Omega} = 0}$, and only the tangential part suffices; $\nabla \times u|_{\partial \Omega} = 0$ is too strong. Or you can set for the test function space $\color{red}{\nu\times v = 0}$; then $\nu \times u$ can be prescribed as any permissible boundary data $g$ (those with zero surface divergence). Choosing blue, you have to work modulo a gradient in a proper function space $H$ to set up a well-defined problem: $$ \left\{\begin{aligned} \nabla\times (\nabla \times u) &=0 \quad \text{ in } \Omega, \\ \nu \times (\nabla \times u) &= 0\quad \text{ on } \partial \Omega. \end{aligned}\right.\tag{2} $$ For $u \in H/\nabla p$ the quotient space, which means we don't tell the difference between $u $ and $u+\nabla p$, the above problem has a unique solution. The only solution for (2) is zero, a.k.a. $u = 0$ or say $u = \nabla p$. Once we add a gauge condition $\nabla \cdot u = f$, then (2) is equivalent to the Neumann problem for $p$: $$ \left\{\begin{aligned} \Delta p &= f \quad \text{ in } \Omega, \\ \nabla p\cdot \nu &= 0\quad \text{ on } \partial \Omega. \end{aligned}\right. $$ Choosing red, the problem is: $$ \left\{\begin{aligned} \nabla\times (\nabla \times u) &=0 \quad \text{ in } \Omega, \\ \nu \times u &= g\quad \text{ on } \partial \Omega. \end{aligned}\right.\tag{3} $$ Again this problem has a unique solution in $H/\nabla p$, except this time $p$ is constant on the boundary, which implies $\nabla p\times \nu =0$. If we set $g$ to be 0, then the above problem again has a zero solution, which means $u = \nabla p$. Adding a gauge condition $\nabla \cdot u = f$, (3) can be transformed to a Dirichlet boundary problem for $p$: $$ \left\{\begin{aligned} \Delta p &= f \quad \text{ in } \Omega, \\ p &= \mathrm{Constant}\quad \text{ on } \partial \Omega. \end{aligned}\right. $$ Summary: the minimizer $u = \mathrm{arg}\min\mathcal{G}(v)$ is a gradient field $\nabla p$, and the boundary condition of $p$ depends on the boundary condition of $u$. Neither problem (2) nor (3) requires the normal component of $\nabla \times u$ or $u$ to be imposed.
If you prescribe $\nabla\times u = 0$ on the boundary, which implies $\nabla\times u\cdot \nu = 0$, the problem may be over-determined.
(This is supposed to be a shorter comment but I find myself powerless). I haven't checked your reasoning, but I wanted to offer a simple (standard) proof. Some authors call this Dirichlet's principle. Define $E(f) := \int_{\Omega} \| \nabla f \|^2 dx$. It's clear that $E(f) \geq 0$. Assume $f, g$ are equal on $\partial \Omega$. Let's see that, if $f$ is harmonic, then $E(f) \leq E(g)$. For that, let $u := f - g$. Now calculate $E(g) = E(f - u)$ using Green's first identity (here you'll use that $u = 0$ on $\partial \Omega$ and that $\Delta f = 0$). You'll get immediately that $E(g) = E(f) + E(u)$, and using that $E(u) \geq 0$, that $E(f) \leq E(g)$.
Find all solutions to a particular differential equation Find all solutions on ${R}$ of the differential equation $ y' = 3|y|^ \frac{2}{3} $ I believe I need to use separation of variables, but it will only work if the initial condition is nonzero. Therefore, not all solutions can be found this way.
Observe that $|y|^{2/3} = y^{2/3}$. So we solve $\frac{dy}{dx} = 3y^{2/3}$. Use separation of variables: $y^{-2/3}\,dy = 3\,dx$. So $3y^{1/3} = 3x + C$. So $y = (x + C)^3$ with $y(0) = C^3$.
$p^2$ misses 2 primitive roots When I checked primitive roots of some primes $P$, I found the following phenomenon: $14$ is a primitive root of the prime $29$, but it's not a primitive root of $29^2$ $18$ is a primitive root of the prime $37$, but it's not a primitive root of $37^2$ $19$ is a primitive root of the prime $43$, but it's not a primitive root of $43^2$ $11$ is a primitive root of the prime $71$, but it's not a primitive root of $71^2$ And they are each missing exactly one primitive root; that is, $P$ has one primitive root that cannot be found among the primitive roots of $P^2$. My question is: What is the smallest prime $P$ such that $P$ has $2$ primitive roots that cannot be found among the primitive roots of $P^2$? (Here I mean primitive roots between $0$ and $p-1$)
A small search with pari-gp shows that 367 is the smallest such prime; it misses the primitive roots 159 and 205. Then 653 misses four primitive roots: 84, 120, 287 and 410. A search up to 20000 shows that only 16631 misses 4, while several (1103, 6569, 13187, 14939, 15313, 16649 and 18587) miss 3 primitive roots. Out of curiosity I have measured how many primes have 0, 1, 2, ... primitive roots in the range 1 to p-1 which are not primitive roots of $p^2$; it seems to behave like a Poisson distribution with $\lambda = -\log\log 2$. Does somebody have an explanation? Just to show how good this estimate is, here is the count up to 409499 of primes with 0 to 6 primitive roots missing mod $p^2$: $$\begin{array}{c|r|r} \text{missing} & \text{Observed} & \text{Expected} \\ \hline 0 & 23\,949 & 23\,962 \\ 1 & 8\,695 & 8\,782 \\ 2 & 1\,696 & 1\,609 \\ 3 & 210 & 197 \\ 4 & 19 & 18 \\ 5 & 0 & 1 \\ 6 & 1 & 0 \end{array}$$ So we should expect to have some prime with 7 or more after about 8 000 000 primes.
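For readers without pari-gp, the same search is a few lines of SymPy. This is my own sketch of the computation described above, counting primitive roots $g$ of $p$ with $g^{p-1}\equiv 1 \pmod{p^2}$ (such $g$ cannot be primitive roots of $p^2$):

```python
from sympy import primerange, n_order

def misses(p):
    # Primitive roots of p in [2, p-1] whose order does not grow mod p^2.
    return sum(1 for g in range(2, p)
               if n_order(g, p) == p - 1 and pow(g, p - 1, p*p) == 1)

for p in primerange(3, 400):
    m = misses(p)
    if m >= 2:
        print(p, m)   # first hit should be 367 with 2 misses
```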
If you take a prime like 7, its square is 49, which has 42 coprime residues in 0-49. These can be represented as some x=g^n, where n ranges from 0 to 6*7-1 = 41. When n is a multiple of p, then p divides its own period. For example, 3 is a primitive root of 7. 3^5 gives 5 mod 7, and 3^35 gives 19 mod 49. We now observe that 19 is a primitive root of 7, but in base 19, 7 divides its own period. The number of small sevenites (ie 1 < n < p) is relatively small; none has gone as far as 20 in the search range, although 3511 has 10. A run was done to find the 40 smallest sevenites for primes to something like 2,400,000. This is done by insertion-sort using 60p as the upper banker. From this table (it is about 2 cdrom's worth of zipped files), one extracts the list of small sevenites. There are lots of bases which do not have small sevenites. A compound sevenite is one where a higher power divides b^(p-1). An example is the third-order 7³ divides 18^3-1, and the fourth-order 13^4 dividing 239^4-1. There is only one small compound sevenite: 113³ in base 68. No sevenite is known for the fibonacci series, but the sqrt(2) series 1,2,5,12,... has no fewer than 3 (13, 31 and 1546463), and the Heron triangle series, as used to find the Mersenne primes, and also integer area triangles of the form b-1, b, b+1, has one (103), these searches having been taken to two millions (ie 2*120^4). Sevenites are closely connected to the thing about 'x²=x, mod b^n'. In practice, these are specific examples of the general case. Where p is a sevenite of x, then x written in base p will agree to the last n places. For example, 10 is a primitive root and sevenite of 487. In base 487, the sevenite tail ending in 10 is ... 0, 10. The 0 in column one tells us that it's a small sevenite. In base 13, the number 239 is written as 155. The sevenite tail for 5 in 13 is 01550155. Because the last four, but not five, digits agree with 0000155, we see that 13^4 divides it, but 13^5 does not. There is a fairly authoritative page on Wikipedia at 'Euler's quotient'. There's a list of sevenites at http://z13.invisionfree.com/DozensOnline/index.php?showtopic=737 or the DozensOnline forum | mathematics | number theory | sevenites.
Proving that $k|(n^k-n)$ for prime $k$ Prove that for any integer $n$,we have $(n^k)- n$ is divisible by $k$ for $k=3,5,7,11,13$ I tried using prime factorization but that does not work here
HINT: there are several ways to prove the congruence of the OP, which corresponds to Fermat's little theorem. A rather simple one: if $k \mid n$ the statement is trivial, so assume $\gcd(n,k)=1$ and consider the sequence of integers $$n,2n,3n,\dots,(k-1)n$$ Among these integers, none is congruent modulo $k$ to another, and none is divisible by $k$ (this is not difficult to prove). So, modulo $k$, they are a rearrangement of the reduced residue system $\{1,2,3,\dots,k-1\}$. Taking the product, we have $$n \cdot 2n \cdot 3n \cdot \dots \cdot (k-1)n \\ \equiv 1 \cdot 2 \cdot 3 \cdot \cdots \cdot (k-1) \pmod k $$ and then $$n^{k-1} \cdot (k-1)! \equiv (k-1)! \pmod k$$ Since $\gcd((k-1)!,k)=1$, you can cancel $(k-1)!$ and easily complete the proof.
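A brute-force confirmation of the statement for the listed moduli (a minimal check of mine, not part of the proof):

```python
# k | n^k - n  is equivalent to  n^k = n (mod k); pow handles negative n too.
for k in (3, 5, 7, 11, 13):
    assert all(pow(n, k, k) == n % k for n in range(-500, 500))
print("k divides n^k - n for all tested n and k in {3, 5, 7, 11, 13}")
```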
Agreed, this is just Fermat's little theorem restated, i.e. $n^{k-1}\equiv 1\pmod k$ where $k$ is prime (and $k \nmid n$). So $n^k\equiv n\pmod k$, which is what you are trying to show.
How do you prove each $x$ for $a^x=y$ is unique? I'm not looking for limits or any calculus-related argument; I want to know how to prove uniqueness on a more fundamental level. I am at the point where I want to show $$a^x=a^y \implies x=y$$ but if I haven't yet proven the existence of a logarithm, how could one possibly show that $x=y$? There is no way to get rid of that base $a$, but the fact that someone has already defined $\log_a(x)$ implies someone somehow did so some centuries ago. In other uniqueness arguments, like those for rational functions, you can manipulate both sides with rational operations, but you can't do that here because you haven't proven a logarithm yet! So how could anyone possibly show $x=y$ for $x>0$, $y>0$? Unless: can I use the properties of a logarithm after only making an argument for the existence of $x$ such that $a^{x}=y$, even if I haven't yet proven uniqueness, and then use the existence to prove uniqueness for $a \neq 1$?
Without knowing about logarithms, the question need not even begin to make sense, since it is unclear just what $a^x$ might mean, even for positive $a$, when $x$ is not restricted. You say in the comments that you would define the function $f(x)=a^x$ simply as a function with the property that $$\begin{align*} f(1) &= a\\ f(x+y) &= f(x)f(y). \end{align*}$$ However, these properties do not suffice to conclude that $f$ is one-to-one (which is required in order to conclude that $f(x)=f(y)$ implies $x=y$), or that there is a unique $x$ such that $f(x)=y$ for a given $y$ (or at most one such $x$ if you don't want to assume surjectivity). In particular, if we assume the Axiom of Choice, then there are functions that satisfy both $f(1)=a$ and $f(x+y)=f(x)f(y)$, but that are not one-to-one. "Explicitly" (modulo the Axiom of Choice), let $\beta$ be a basis for $\mathbb{R}$ as a vector space over $\mathbb{Q}$ such that $1\in\beta$. Then any function $g\colon \beta\to\mathbb{R}$ can be extended to an additive function $g\colon\mathbb{R}\to\mathbb{R}$; that is, a function defined on all of $\mathbb{R}$, whose values at $\beta$ are as specified, and such that for all $x,y\in\mathbb{R}$, we have $g(x+y)=g(x)+g(y)$. Now, define $g\colon \beta\to\mathbb{R}$ by letting $g(1) = 1$ and $g(r)=0$ for all $r\in\beta$, $r\neq 1$. Then define $f\colon\mathbb{R}\to\mathbb{R}$ by $f(x) = a^{g(x)}$. Then $f(1) = a^{g(1)} = a^1= a$; and $f(x+y) = a^{g(x+y)} = a^{g(x)+g(y)} = a^{g(x)}a^{g(y)} = f(x)f(y)$. So this function $f$ satisfies the two given equations. However, $\beta$ is uncountable, so pick $r\neq 1$ that is in $\beta$. Then $f(r) = a^{g(r)} = a^0 = 1$, and $f(0) = 1$ (since $g(0)=0$ must hold for $g$ to be additive). However, $r\neq 0$, since $0$ cannot be an element of $\beta$. Thus, the two conditions $f(1)=a$ and $f(x+y)=f(x)f(y)$ do not suffice to show that $f$ is one-to-one. Which means you need to specify a lot of other stuff; specifically, one needs to know exactly what properties you are giving the function $f$. (Yes, I know I'm using the exponential function to define this; but the point is that there are interpretations of the function $f$ that make all the assumptions true but the desired conclusion false, which means that one cannot prove the fact that $f$ is one-to-one using only the assumptions listed.) It is difficult to define either the exponential or the logarithm at the basic level of calculus-before-the-fundamental-theorem. One can define the exponential function by first defining the functions $a\longmapsto a^n$ with $n$ a positive integer, inductively. Then for $n$ a negative integer using reciprocals. Then for $n$ the reciprocal of a positive integer using inverse functions. Then for $n$ a rational using $a^{p/q} = (a^p)^{1/q}$. Then prove that if $\frac{p}{q}=\frac{r}{s}$ with $p,q,r,s$ integers, $q,s\gt 0$, then we get $a^{p/q}=a^{r/s}$. Then define $a^x$ for arbitrary $x$ by letting $(q_n)$ be a sequence of rationals such that $q_n\to x$ as $n\to\infty$, and showing that the sequence $a^{q_n}$ is Cauchy and converges to a number we call $a^x$. Then showing that if $(q_n)$ and $(r_n)$ are two sequences of rationals that both converge to $x$, then $\lim_{n\to\infty} a^{q_n} = \lim_{n\to\infty} a^{r_n}$. And once all of this has been done, then one can show that the function is strictly monotone when $a\gt 0$, $a\neq 1$, to deduce what you want (and hence that it has an inverse and logarithms exist). Obviously, this requires a lot of work.
Or one can use integrals and define the natural logarithm by $$\ln(x) = \int_1^x \frac{1}{t}\,dt$$ for $x\gt 0$. Using the Fundamental Theorem of Calculus one can show that this function is continuous and differentiable; using the properties of the integral that it is strictly increasing, and so has an inverse. Call the inverse the exponential function $\exp(x)$; and then define $a^x = \exp(x\ln(a))$. And then prove that this function is strictly monotone when $a\gt 0$, $a\neq 1$. This requires enough Calculus to prove the Fundamental Theorem of Calculus first. Again, a lot of work. "Someone did it centuries ago"... The logarithm is a pretty recent "invention" as these things go, and it is closely connected with the development of calculus. Actual formal proofs of its properties (as well as actual formal proofs of the properties of the general exponential function) date from after the invention of calculus, and generally require some analysis or some calculus. I don't think you can really prove this via "elementary", non-analysis, non-calculus methods.
I would like to give a simple explanation. Let us consider $a^x=a^y$, which means $\underbrace{a\cdot a\cdots a}_{x\text{ times}} = \underbrace{a\cdot a\cdots a}_{y\text{ times}}$. It means $(a\cdots a,\ x\text{ times})/(a\cdots a,\ y\text{ times}) = 1$, which is possible only when both numerator and denominator have the same number of factors. This yields $x=y$.
"All math is useful eventually" We have all heard the argument : a lot of mathematics that was thought to be useless, abstract constructions with no links to the real world ended up being of use, like some arithmetic is useful in crypto. Some people say that the same applies to all of mathematics. My question is : do you buy this ? More precisely, and to make this into a question that I hope fits the requirements of this website : Do you have an example of a mathematical theory (i.e. not an isolated theorem, but a coherent set of mathematical concepts and theorems) that you believe will be of no use, ever, to let's say engineers or physicists or non-mathematician scientists in general ? By of use I mean that it is a mathematical object so relevant to a field or a model that non-mathematicians have to think in terms of this mathematical theory, OR that it is a crucial ingredient to the mathematical proof of some other useful mathematical result. Giving one proof among other, more simple ones is not sufficient. And a theory which language can be used to describe certain models but gives no significant insight or power of prediction doesn't count. A few examples : hamiltonian systems are a good way to model most mecanical systems : useful Fourier transforms are a useful tool in numerous calculations : useful not being an expert in mathematical physics I can't give a precise example, but I would classify homology theory in useful because it is such a powerful mathematical tool that I'm sure it is a necessary ingredient to something with real-life applications, or will be tilings : applications in chemistry, including recently one with Penrose's aperiodic tilings : useful p-adic analysis : to my knowledge not useful. Let me stress the "to my knowledge", as I know next to nothing about p-adic analysis and what may or may not be its applications.
I think "all math is eventually useful" must be false... There is certainly math that gets abandoned when better things come along, as with cylindrical algebras (as I understand their history). There are things like first-order mereotopology which have seen little development outside philosophy departments (and not much in them either). Arguably (and I say this with much sadness), NF will never catch on as a serious set theory so much as a source of odd model theory; probably likewise with NFU (which is even less deserving of such a fate). Not that I think uselessness is a bad thing, of course.
"Do you have an example of a mathematical theory (i.e. not an isolated theorem, but a coherent set of mathematical concepts and theorems) that you believe will be of no use, ever, to let's say engineers or physicists or non-mathematician scientists in general ?" Take Laplace Transform (LT) and Fourier Transform (FT) for examples. Both are used widely in engineering and physics. Both use infinity. Infinity is not there in nature and in engineering. Therefore these theories are false and cannot or should not be used. A large number is never an approximation to infinity. If you replace infinity, by a large number, then the characteristics of both LT and FT will change. The result of applications will become false. For example, finite LT will not have poles any more. Therefore by using LT we are corrupting engineering by introducing poles. Similarly, finite FT will change capacity theorem, and uncertainty principle (UP). Finite FT will remove uncertainty from quantum mechanics (QM). Take a look at chapter one on Truth and another chapter on QM in the book at the blog site https://theoryofsouls.wordpress.com/
Begin with one cell, which can die, do nothing, or transform into 2 or 3 cells, each with probability 1/4. What is the probability of extinction? A colony begins with one cell which, at the next time point, can die, do nothing, or transform into two or three cells, each case with probability 1/4. Children cells share the same behavior described above. What's the probability of this colony's extinction? I got two solutions, $1$ and $\sqrt{2}-1$, from a simple recursive equation, $p=\frac{1}{4}(1+p+p^2+p^3)$. But I've no idea which one is correct. Thanks for your help!
You've done most of the work; what remains is to decide whether $1$ or $\sqrt2-1$ is the probability of eventual extinction. For that purpose, let $x_n$ be the probability of the event that the population goes extinct at or before the $n$-th time step. Notice that these events form an increasing sequence with respect to $\subseteq$, so the probability of their union, the event of eventual extinction, is the supremum (and also the limit) of the increasing sequence of numbers $x_n$. The issue is therefore whether this sequence ever gets above $\sqrt2-1$. We have $x_0=0$ (since the population is initially a single cell, not extinct), and $$ x_{n+1}=\frac14(1+x_n+{x_n}^2+{x_n}^3) $$ (because of the rules for how the cells multiply or die or do nothing). Notice that this equation says $x_{n+1}=f(x_n)$, where $f(x)=\frac14(1+x+x^2+x^3)$ is the function whose fixed points you already calculated. In particular, you know that $f(\sqrt2-1)=\sqrt2-1$. But $f(x)$ is clearly an increasing function of $x$ as long as $x\geq0$. So, since $x_0<\sqrt2-1$, we get, by induction on $n$, that $x_n<\sqrt2-1$ for all $n$. Therefore, the probability of eventual extinction is not $1$ but $\sqrt2-1$.
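As a quick numerical sanity check of this fixed-point picture (my own addition, not part of the answer), one can iterate $x_{n+1}=f(x_n)$ from $x_0=0$ and watch the sequence climb toward $\sqrt2-1$ without ever crossing it:

```python
# Iterate x_{n+1} = (1 + x + x^2 + x^3)/4 from x_0 = 0; the iterates
# increase toward sqrt(2) - 1 ~ 0.41421 and never reach the fixed point 1.
def f(x):
    return (1 + x + x**2 + x**3) / 4

x = 0.0
for _ in range(60):
    x = f(x)

print(x)            # 0.4142135...
print(2**0.5 - 1)   # 0.4142135...
```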
I have translated this into a birth/death process. Often one can set up the master equations and derive the macroscopic relations from the microscopic ones. As there is no non-linearity (no interaction between the cells), the reaction rates correspond to your probability of $1/4$ and we get $$C \xrightarrow{k} D$$ $$C \xrightarrow{k} 2C$$ $$C \xrightarrow{k} 3C$$ $$C \xrightarrow{k} C$$ Counting the net change of each reaction ($-1$, $+1$, $+2$ and $0$ cells, respectively), $$\frac{d C}{d t}=-k\,C+k\,C+2k\,C+0$$ $$\frac{d C}{d t}=2k\,C$$ Not sure there is any threat of the colony ever dying out. $C$ in the equations represents the expectation value (first moment) of the number of living cells over the population. The above calculation holds if you start with a large population. If you start with one cell, you will need to set up the master equations with the transition probabilities and follow how your probability distribution changes over each time step. At the start it is $1/4$, then less and less. The larger the population gets, the smaller the probability that it dies out. Once it is large enough (law of large numbers), it will follow the above equations.
How does Cantor's diagonal argument work? I'm having trouble understanding Cantor's diagonal argument. Specifically, I do not understand how it proves that something is "uncountable". My understanding of the argument is that it takes the following form (modified slightly from the Wikipedia article, assuming base 2, where the numbers must be from the set $ \lbrace 0,1 \rbrace $): \begin{align} s_1 &= (\mathbf{0},1,0,\dots)\\ s_2 &= (1,\mathbf{1},0,\dots)\\ s_3 &= (0,0,\mathbf{1},\dots)\\ \vdots &= (s_n \text{ continues}) \end{align} In this case, the diagonal number is the bold diagonal numbers $(0, 1, 1)$, which when "flipped" is $(1,0,0)$, neither of which is $s_1$, $s_2$, or $s_3$. My question, or misunderstanding, is: When there exists the possibility that more $s_n$ exist, as is the case in the example above, how does this "prove" anything? For example: \begin{align} s_0 &= (1,0,0,\mathbf{0},\dots)\ \ \textrm{ (...the wikipedia flipped diagonal)}\\ s_1 &= (\mathbf{0},1,0,\dots)\\ s_2 &= (1,\mathbf{1},0,\dots)\\ s_3 &= (0,0,\mathbf{1},\dots)\\ s_4 &= (0,1,1,\mathbf{1},\dots)\\ s_4 &= (1,0,0,\mathbf{1},\dots)\ \ \textrm{ (...alternate, flipped } s_4\textrm{)}\\ s_5 &= (1,0,0,0,\dots)\\ s_6 &= (1,0,0,1,\dots)\\ \vdots &= (s_n \text{ continues}) \end{align} In other words, as long as there is a $\dots \text{ continues}$ at the end, the very next number could be the "impossible diagonal number", with the caveat that it's not strictly identical to the "impossible diagonal number" as the Wikipedia article defines it: For each $m$ and $n$ let $s_{n,m}$ be the $m^{th}$ element of the $n^{th}$ sequence on the list; so for each $n$, $$s_n = (s_{n,1}, s_{n,2}, s_{n,3}, s_{n,4}, \dots).$$ ...snip... Otherwise, it would be possible by the above process to construct a sequence $s_0$ which would both be in $T$ (because it is a sequence of 0s and 1s which is by the definition of $T$ in $T$) and at the same time not in $T$ (because we can deliberately construct it not to be in the list). $T$, containing all such sequences, must contain $s_0$, which is just such a sequence. But since $s_0$ does not appear anywhere on the list, $T$ cannot contain $s_0$. Therefore $T$ cannot be placed in one-to-one correspondence with the natural numbers. In other words, it is uncountable. I'm not sure this definition is correct, because if we assume that $m = (1, \dots)$, then this definition says that "$s_n$ is equal to itself": there is no "diagonalization" in this particular description of the argument, nor does it incorporate the "flipping" part of the argument, never mind the fact that we have very clearly constructed just such an impossible $T$ list above. An attempt to correct the "diagonalization" and "flipping" problem: $$s_n = (\lnot s_{m,m}, \lnot s_{m,m}, \dots) \quad \text{where $m$ is the element index and} \quad\begin{equation}\lnot s_{m,m} = \begin{cases}0 & \mathrm{if\ } s_{m,m} = 1\\1 & \mathrm{if\ } s_{m,m} = 0\end{cases}\end{equation}$$ This definition doesn't quite work either, as we immediately run into problems with just $s_1 = (0)$, which is impossible because by definition $s_1$ must be $=(1)$ if $s_1 = (0)$, which would also be impossible because... it's turtles all the way down!? Or more generally, with the revised definition there is a contradiction whenever $n = m$, which would seem to invalidate the revised formulation of the argument / proof.
Nothing about this argument / proof makes any sense to me, nor why it only applies to real numbers and makes them "uncountable". As near as I can tell it would seem to apply equally well to natural numbers, which are "countable". What am I missing?
First, let me give you a proof of the following: Let $\mathbb{N}$ be the natural numbers, $\mathbb{N}=\{1,2,3,4,5,\ldots\}$, and let $2^{\mathbb{N}}$ be the set of all binary sequences (functions from $\mathbb{N}$ to $\{0,1\}$, which can be viewed as "infinite tuples" where each entry is either $0$ or $1$). If $f\colon\mathbb{N}\to 2^{\mathbb{N}}$ is a function, then $f$ is not surjective. That is, there exists some binary sequence $s_f$, which depends on $f$, such that $f(n)\neq s_f$ for all natural numbers $n$. What I denote $2^{\mathbb{N}}$ is what Wikipedia calls $T$. I will represent elements of $2^{\mathbb{N}}$ as tuples, $$(a_1,a_2,a_3,\ldots,a_n,\ldots)$$ where each $a_i$ is either $0$ or $1$; these tuples are infinite; we think of the tuple as defining a function whose value at $n$ is $a_n$, so it really corresponds to a function $\mathbb{N}\to\{0,1\}$. Two tuples are equal if and only if they are identical: that is, $$(a_1,a_2,a_3,\ldots,a_n,\ldots) = (b_1,b_2,b_3,\ldots,b_n,\ldots)\text{ if and only if } a_k=b_k\text{ for all }k.$$ Now, suppose that $f\colon\mathbb{N}\to 2^{\mathbb{N}}$ is a given function. For each natural number $n$, $f(n)$ is a tuple. Denote this tuple by $$f(n) = (a_{1n}, a_{2n}, a_{3n},\ldots,a_{kn},\ldots).$$ That is, $a_{ij}$ is the $i$th entry in $f(j)$. I want to show that this function is not surjective. To that end, I will construct an element of $2^{\mathbb{N}}$ that is not in the image of $f$. Call this tuple $s_f = (s_1,s_2,s_3,\ldots,s_n,\ldots)$. I will now say what $s_k$ is. Define $$s_k = \left\{\begin{array}{ll} 1 &\mbox{if $a_{kk}=0$;}\\ 0 &\mbox{if $a_{kk}=1$.} \end{array}\right.$$ This defines an element of $2^{\mathbb{N}}$, because it defines an infinite tuple of $0$s and $1$s; this element depends on the $f$ we start with: if we change the $f$, the resulting $s_f$ may change; that's fine. (This is the "diagonal element".) Now, the question is whether $s_f = f(n)$ for some $n$. The answer is "no." To see this, let $n\in\mathbb{N}$ be any natural number. Then $$f(n) = (a_{1n},a_{2n},a_{3n},\ldots,a_{nn},\ldots)$$ so the $n$th entry of $f(n)$ is $a_{nn}$. If the $n$th entry of $f(n)$ is $0$, then by construction the $n$th entry of $s_f$, $s_n$, is $1$, so $f(n)\neq s_f$. If the $n$th entry of $f(n)$ is $1$, then by construction the $n$th entry of $s_f$, $s_n$, is $0$. Then $f(n)\neq s_f$ again, because they don't agree on the $n$th entry. This means that for every $n\in\mathbb{N}$, $s_f$ cannot equal $f(n)$, because they differ in the $n$th entry. So $s_f$ is not in the image of $f$. What we have shown is that given a function $f\colon\mathbb{N}\to 2^{\mathbb{N}}$, there is some element of $2^{\mathbb{N}}$ that is not in the image of $f$. The element depends on what $f$ is, of course; different functions will have possibly different "witnesses" to the fact that they are not surjective. Think of the function $f$ being hauled before a judge and accused of Being Surjective; to prove its innocence, $f$ produces a witness to verify its alibi that it's not surjective; this witness is $s_f$, who can swear to the fact that $f$ is not surjective because $s_f$ demonstrates that $f$ is not surjective: $s_f$ is not in $\mathrm{Im}(f)$; if the police haul in some other function $g$ and accuse that function of being surjective, $g$ will also have to produce a witness to verify its alibi that it isn't surjective; but that witness does not have to be the same witness that $f$ produced. The "witness" we produce will depend on who the "accused" is.
The reason this is called the "diagonal argument" or the sequence $s_f$ the "diagonal element" is that just like one can represent a function $\mathbb{N}\to \{0,1\}$ as an infinite "tuple", so one can represent a function $\mathbb{N}\to 2^{\mathbb{N}}$ as an "infinite list", by listing the image of $1$, then the image of $2$, then the image of $3$, etc: $$\begin{align*} f(1) &= (a_{11}, a_{21}, a_{31}, \ldots, a_{k1},\ldots )\\ f(2) &= (a_{12}, a_{22}, a_{32}, \ldots, a_{k2},\ldots)\\ &\vdots\\ f(m) &= (a_{1m}, a_{2m}, a_{3m},\ldots, a_{km},\ldots) \end{align*}$$ and if one imagines the function this way, then the way we construct $s_f$ is by "going down the main diagonal", looking at $a_{11}$, $a_{22}$, $a_{33}$, etc. Now, remember the definition of "countable": Definition. A set $X$ is said to be countable if and only if there exists a function $f\colon\mathbb{N}\to X$ that is surjective. If no such function exists, then $X$ is said to be uncountable. That means that the theorem we proved above shows that: Theorem. The set of all binary sequences, $2^{\mathbb{N}}$, is not countable. Why? Because we showed that there are no surjective functions $\mathbb{N}\to 2^{\mathbb{N}}$, so it is not countable. How does this relate to the real numbers? The real numbers are bijectable with the set $2^{\mathbb{N}}$; in fact, for the argument below all we need is a surjection $H\colon\mathbb{R}\to 2^{\mathbb{N}}$. If we had a surjection $\mathbb{N}\to\mathbb{R}$, then composing this surjection with $H$ we would get a surjection from $\mathbb{N}$ to $2^{\mathbb{N}}$, and no such surjection exists. So there can be no surjection from $\mathbb{N}$ to $\mathbb{R}$, so $\mathbb{R}$ is not countable (that is, it is uncountable). Bijecting $\mathbb{R}$ with $2^{\mathbb{N}}$ is a bit tricky; you can first biject $\mathbb{R}$ with $[0,1]$; then you would want to use the binary representation (as in Wikipedia's article), so that each sequence corresponds to a binary expansion, and each number in $[0,1]$ corresponds to a binary sequence (its digits when written in binary); the problem is that just like some numbers in decimal have two representations ($1$ and $0.999\ldots$ are equal), so do some numbers have two representations in binary (for example, $0.01$ and $0.0011111\ldots$ are equal). There is a way of fixing this problem, but it is a bit technical and may obscure the issue, so I would rather not get into it. Instead, let me note that the set $2^{\mathbb{N}}$ can be mapped in a one-to-one way into $(0,1)$: simply take a binary sequence $$(a_1,a_2,a_3,\ldots,a_n,\ldots)$$ and map it to the decimal number that has a $5$ in the $k$th decimal position if $a_k=0$, and has a $6$ in the $k$th decimal position if $a_k=1$. Using $5$ and $6$ ensures that each number has only one decimal representation, so the map is one-to-one. Call this map $h$. Define $H\colon\mathbb{R}\to 2^{\mathbb{N}}$ as follows: given a real number $x$, if $x$ is in the image of $h$, then define $H(x)$ to be the unique sequence $s$ such that $h(s)=x$. If $x$ is not in the image of $h$, then define $H(x)$ to be the sequence $(0,0,0,\ldots,0,\ldots)$. Notice that $H$ is surjective, because $h$ is defined on all of $2^{\mathbb{N}}$. This is enough to show that there can be no surjection from $\mathbb{N}$ to $\mathbb{R}$: suppose that $f\colon\mathbb{N}\to\mathbb{R}$ is any function.
Then the function $H\circ f\colon \mathbb{N}\stackrel{f}{\to}\mathbb{R}\stackrel{H}{\to}2^{\mathbb{N}}$ is a function from $\mathbb{N}$ to $2^{\mathbb{N}}$. Since any function from $\mathbb{N}$ to $2^{\mathbb{N}}$ is not surjective, there is some $s\in 2^{\mathbb{N}}$ that is not in the image of $H\circ f$. Since $s$ is in the image of $H$, there exists some $x\in\mathbb{R}$ such that $H(x)=s$. That means that $f(n)\neq x$ for all $n$ (since $H\circ f(n)\neq s$). Since there can be no surjection from $\mathbb{N}$ to $\mathbb{R}$, that means that $\mathbb{R}$ is uncountable. So, as to your questions. First, you should understand that the diagonal argument is applied to a given list. You already have all of $s_1$, $s_2$, $s_3$, etc., in front of you. Nobody is allowed to change them. You construct the "diagonal number" (my $s_f$ above) on the basis of that list. Yes, if you change the list then you can put the diagonal number $s_f$ in the new list; but $s_f$ is only a witness to the fact that the original list was not a list of all sequences. If you change to a different list, then I will have to produce a different witness. The witnesses depend on the given list. You know that $s_4$ is not equal to $s_f$ because $s_f$ is constructed precisely so that it disagrees with $s_4$ in the $4$th position, and one disagreement is enough to guarantee inequality. Wikipedia's presentation seems to argue by contradiction; I don't like to introduce that into these discussions because the argument is difficult enough to "grok" without the added complication. (The "Otherwise..." part is an argument by contradiction, arguing that if you could 'list' the elements of $T$, then you would apply the argument to show that this 'complete list' is not 'complete', etc.) There's no need. Simply, there is no surjection from $\mathbb{N}$ to $T$, as discussed above. Now, there is a common "first reaction" that this argument would apply "just as well" to the natural numbers: take a list of natural numbers listed in binary, and engineer an argument like the diagonal argument (say, by "reflecting them about the decimal point", so they go off with a tail of zeros to the right; or by writing them from left to right, with least significant digit first, instead of last) to produce a "number" not on the list. You can do that, but the problem is that natural numbers only correspond to sequences that end with a tail of $0$s, and trying to do the diagonal argument will necessarily produce a "number" that does not have a tail of $0$s, so that it cannot represent a natural number. The reason the diagonal argument works with binary sequences is that $s_f$ is certainly a binary sequence, as there are no restrictions on the binary sequences we are considering. I hope this helps.
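To see the flipping construction mechanically, here is a small Python toy of the diagonal element (my own illustration; of course the real argument applies to the whole infinite list, not a finite prefix):

```python
# Given a finite list of 0/1 sequences (as functions from index to {0,1}),
# build the flipped diagonal, which differs from the n-th sequence at entry n.
def diagonal(seqs):
    return [1 - seqs[n](n) for n in range(len(seqs))]

s1 = lambda k: 0 if k == 0 else 1   # (0,1,1,1,...)
s2 = lambda k: k % 2                # (0,1,0,1,...)
s3 = lambda k: 1                    # (1,1,1,1,...)

print(diagonal([s1, s2, s3]))
# [1, 0, 0]: differs from s1 at entry 0, from s2 at entry 1, from s3 at entry 2
```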
Several things that are taught, or implied by the way it is taught, about Cantor's diagonalization are not correct. And they lead to misunderstandings like this one. 1) "Cantor wanted to prove that the real numbers are uncountable." No. Cantor wanted to prove that if we accept the existence of infinite sets, then they come in different sizes that he called "cardinality." 2) "Diagonalization was his first proof." No. His first proof was published 17 years earlier. 3) "The proof is about real numbers." No. The real numbers were the example he used in the first proof, but some other mathematicians objected to assumptions he made about them. So diagonalization explicitly did not. It used what I call Cantor Strings: infinite-length strings of the characters "m" and "w." A similar proof can be used on real numbers, but it requires more steps that are often left out. 4) "It is a formal proof." No. It was an informal explanation of a more formal version, about power sets. 5) "Diagonalization is a proof by contradiction." No, no, no. No. It is a proof by contraposition, which is similar but not the same. By proving "If A, then B" you also prove "If not B, then not A." What diagonalization proves is "If S is an infinite set of Cantor Strings that can be put into a 1:1 correspondence with the positive integers, then there is a Cantor String that is not in S." The contrapositive of this is "If there are no Cantor Strings that are not in the infinite set S, then S cannot be put into a 1:1 correspondence with the positive integers."
Solve the following parametric system of three equations and three unknowns This is my first time posting here, so I'm excited to join the community and gain as much knowledge as possible. My algebra is quite lacking and I'm a first-year physicist. Please could you help me solve the following three equations to find $I_3$ in terms of only $R$s and $\xi$ (i.e. no $I_1$ or $I_2$ terms). Thank you! $$\begin{eqnarray} \xi_1 - I_1 R_1 - (I_1 - I_2) R_4 &=& 0\\ - I_2 R_2 - (I_2 - I_3) R_5 - (I_2 - I_1) R_4 &=& 0\\ - I_3 R_3- I_3 R_6 - (I_3 - I_2) R_5 &=& 0 \end{eqnarray}$$
So far, I get "augmented matrix" $$ \left( \begin{array}{ccc|c} R_1 + R_4 &amp; - R_4 &amp; 0 &amp; \xi_1 \\ -R_4 &amp; R_2 + R_4 + R_5 &amp; - R_5 &amp; 0 \\ 0 &amp; - R_5 &amp; R_3 + R_5 + R_6 &amp; 0 \end{array} \right) $$ OOH, symmetric square part
I calculated the solution with Maxima:

```
(%i1) list_of_equations: [xi1-I1*R1-(I1-I2)*R4=0,
                          -I2*R2-(I2-I3)*R5-(I2-I1)*R4=0,
                          -I3*R3-I3*R6-(I3-I2)*R5=0];
(%o1) [(I2 - I1) R4 - I1 R1 + xi1 = 0,
       - (I2 - I3) R5 + (I1 - I2) R4 - I2 R2 = 0,
       - I3 R6 + (I2 - I3) R5 - I3 R3 = 0]
(%i2) list_of_unknowns: [I1, I2, I3];
(%o2) [I1, I2, I3]
(%i3) list_of_solutions: solve(list_of_equations, list_of_unknowns);
(%o3) [[I1 = (xi1 (R5 (R6 + R3 + R2) + R2 (R6 + R3)) + xi1 R4 (R6 + R5 + R3))
             /(R4 (R1 (R6 + R5 + R3) + R5 (R6 + R3 + R2) + R2 (R6 + R3))
               + R1 (R5 (R6 + R3 + R2) + R2 (R6 + R3))),
        I2 = (xi1 R4 (R6 + R5 + R3))
             /(R4 (R1 (R6 + R5 + R3) + R5 (R6 + R3 + R2) + R2 (R6 + R3))
               + R1 (R5 (R6 + R3 + R2) + R2 (R6 + R3))),
        I3 = (xi1 R4 R5)
             /(R4 (R1 (R6 + R5 + R3) + R5 (R6 + R3 + R2) + R2 (R6 + R3))
               + R1 (R5 (R6 + R3 + R2) + R2 (R6 + R3)))]]
```
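For readers without Maxima, a minimal sympy cross-check (my own addition; the symbol names just mirror the question) reproduces the same $I_3$:

```python
# Solve the three circuit equations symbolically and print I3.
import sympy as sp

I1, I2, I3, R1, R2, R3, R4, R5, R6, xi1 = sp.symbols('I1 I2 I3 R1 R2 R3 R4 R5 R6 xi1')

eqs = [
    sp.Eq(xi1 - I1*R1 - (I1 - I2)*R4, 0),
    sp.Eq(-I2*R2 - (I2 - I3)*R5 - (I2 - I1)*R4, 0),
    sp.Eq(-I3*R3 - I3*R6 - (I3 - I2)*R5, 0),
]

sol = sp.solve(eqs, [I1, I2, I3])
print(sp.factor(sol[I3]))   # numerator xi1*R4*R5, matching the Maxima output
```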
Orienting Variables Matrix For the system of equations above, if I were to use matrices to solve for each of the variables, how is the variables matrix ordered? If all the variables in the equations are aligned vertically (as they already are), and I look at one of the equations, do the variables from left to right correspond, top to bottom, to the entries of the variables matrix? In other words, would the variables matrix then be ordered as such: the top variable is z, the middle variable is x, and the bottom variable is y? What is the general rule for orienting the variable matrix, because I can't seem to find one online?
Usually we choose the alphabetical order $x,y,z$, but of course we can also use any other order, and in particular $z,x,y$. In the latter case the matrix form will be $$\begin{bmatrix}2&1&-1\\-1&2&1\\-3&-1&2\end{bmatrix}\begin{bmatrix}z\\x\\y\end{bmatrix}=\begin{bmatrix}-3\\0\\7\end{bmatrix}$$ which is completely equivalent to $$\begin{bmatrix}1&-1&2\\2&1&-1\\-1&2&-3\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix}=\begin{bmatrix}-3\\0\\7\end{bmatrix}$$
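To make the equivalence concrete, a quick numpy check (my own addition, using the two matrix forms above) shows both orderings recover the same $(x,y,z)$:

```python
import numpy as np

A_zxy = np.array([[ 2,  1, -1],
                  [-1,  2,  1],
                  [-3, -1,  2]], float)   # unknowns ordered (z, x, y)
A_xyz = np.array([[ 1, -1,  2],
                  [ 2,  1, -1],
                  [-1,  2, -3]], float)   # unknowns ordered (x, y, z)
b = np.array([-3.0, 0.0, 7.0])

z, x, y = np.linalg.solve(A_zxy, b)
x2, y2, z2 = np.linalg.solve(A_xyz, b)
print((x, y, z), (x2, y2, z2))   # identical: (-2.0, 7.0, 3.0)
```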
Finding the arc length of the parabola $y=x^2$ from $(0,0)$ to $(1,1)$ As the title says, I need to find the arc length of that. This is what I have so far (I'm mostly stuck on the integration part): $${dy\over dx}=2x \Rightarrow L=\int_0^1 \sqrt{1+(2x)^2}dx$$ Substitute $$x=\tan\theta, \qquad dx=\sec^2\theta\,d\theta ,$$ giving $$\int_0^1 \sqrt{1+(2\tan\theta)^2}\sec^2\theta\,d\theta=\int_0^1 \sqrt{1+4\tan^2\theta}\sec^2\theta\,d\theta$$ That is where I'm stuck. Any help is appreciated, thank you.
Let $2x = \tan\theta$ instead. Then $dx=\frac12\sec^2\theta\,\mathrm d\theta$ and the integral becomes $\displaystyle \int_0^{\arctan 2} \sqrt{1+\tan^2 \theta} \cdot \dfrac12\sec^2\theta \ \mathrm d\theta$, which is equal to $\displaystyle \frac12 \int_0^{\arctan2} \sec^3\theta \ \mathrm d\theta$. (Note that the limits of integration change along with the substitution.)
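Carrying the $\sec^3$ integral through its standard antiderivative gives $L=\frac{\sqrt5}{2}+\frac14\operatorname{arcsinh} 2\approx 1.4789$; a quick sympy confirmation of the original integral (my own addition):

```python
import sympy as sp

x = sp.symbols('x')
L = sp.integrate(sp.sqrt(1 + 4*x**2), (x, 0, 1))
print(L)         # sqrt(5)/2 + asinh(2)/4
print(sp.N(L))   # 1.47894...
```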
You can make the substitution $2x=\sinh(t)$; it is then simple to obtain the result.
Fastest way to multiply numbers mentally? I'm wondering what is the fastest way to multiply numbers. For now, let's focus on 2-digit numbers, where one cannot use scrap paper. I've come across 3 fast methods, illustrated on 64x43:

1) 60x43 + 4x43 (note that 60x43 is actually a 1-digit times 2-digit problem): 2580 + 172 = 2752

2) 64x43 = 60x40 + 4x3 = 2412, plus the cross terms 4x40 + 3x60 = 340, for a total of 2752

3) Cross multiplication in columns:

```
  64
 x43
----
  12   (4x3)
 34    (6x3)+(4x4)  (cross multiply)
24     (6x4)
----
2752
```

*I am aware that some multiplications can be done faster with specific tricks, for example if we square a number, if one of the numbers ends with 5 or is equal to 11, or if they both end in the same digit, or if one is close to a multiple of 10, or if one is a product of two 1-digit numbers, etc. But my question regards the general case. Now practice makes perfect, but before I start training one method I would like to know which one is to be preferred. Is there someone who can say something useful about this? I guess method 3 is not ideal, since the speed of doing the cross multiplication is partly due to the way it is written in this example (one above the other). So can someone say whether method 1 or 2 is to be preferred in the sense of speed/simplicity? Related: What is the fastest way to multiply two digit numbers?
What I usually do, which is probably not the fastest way, is try to look for patterns that I can resolve easily and split the numbers accordingly. For example, in your case, $$64\times43 = 64\times32+64\times11$$ I immediately recognize $64=2^6$ and $32=2^5$, and I know how to quickly multiply by $11$, so I get $$64\cdot43 = 2^{11}+640+64=2048+704 = 2752$$ While strictly speaking this is not the "general case", I find that every time I have to multiply two digit numbers, a trick like this is possible - and I find it to be the most fun way to do it. This is probably hard to scale to more than two digits though.
Maybe my article will help you; in it I have described how one can multiply two-digit numbers mentally: http://mathsequation.com/how-to-memorize-multiplication-table-fast/ 64*43 = 64*40 + 64*3, so basically the problem breaks into multiplications of a two-digit number by one digit, like 64*4 and 64*3, which can be done mentally easily; I have explained that too. Also, once you calculate 64*40 = 2560 and 64*3 = 192 mentally, you can add them by my mental addition trick written here: http://mathsequation.com/mental-addition-trick-learn-within-5-minutes/ I have also explained some additional methods we can use in special cases.
Probability of getting 'k' heads with 'n' coins This is an interview question.( http://www.geeksforgeeks.org/directi-interview-set-1/) Given $n$ biased coins, with each coin giving heads with probability $P_i$, find the probability that on tossing the $n$ coins you will obtain exactly $k$ heads. You have to write the formula for this (i.e. the expression that would give $P (n, k)$). I can write a recurrence program for this, but how to write the general expression ?
You can use Dynamic Programming, as the $N$th toss's outcome is independent of the first $N-1$, and there are two possible cases here: $K$ heads already came in the first $N-1$ tosses, or $K-1$ heads already came in the first $N-1$ tosses. With $dp[i][j]$ the probability of getting $j$ heads in $i$ tosses, $$dp[n][k] = dp[n - 1][k]\cdot (1 - P[n]) + dp[n - 1][k - 1]\cdot P[n]$$
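A compact Python sketch of this recurrence (my own illustration; the function and variable names are mine), run in place over a single array, backwards so each coin is counted once:

```python
def prob_k_heads(p, k):
    """P(exactly k heads) for coins with heads probabilities p[0..n-1]."""
    dp = [1.0] + [0.0] * len(p)    # dp[j] = P(j heads) over the coins seen so far
    for pi in p:
        for j in range(len(dp) - 1, 0, -1):    # backwards: reuse one array
            dp[j] = dp[j] * (1 - pi) + dp[j - 1] * pi
        dp[0] *= (1 - pi)
    return dp[k]

print(prob_k_heads([0.5, 0.5, 0.5], 2))   # 0.375 = C(3,2) / 2^3, sanity check
```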
Let $F(i,j)$ be the probability of finding exactly $j$ heads when $i$ biased coins are tossed, and let $P(i)$ be the probability of landing heads when the $i$th coin is tossed. The recurrence relation for this problem looks like the following: $$F(i,j) = F(i-1,j)\cdot(1-P(i)) + F(i-1,j-1)\cdot P(i)$$ with the base cases $F(1,0) = 1-P(1)$ and $F(1,1) = P(1)$. So now the problem reduces to finding the value $F(n,k)$, which gives us the probability of finding exactly $k$ heads in $n$ tosses. This can be extended to find at most $k$ heads or at least $k$ heads as well, where we have to find the sum of solutions for all $j$ in $[0,k]$ and $[k,n]$ respectively.
Can one work with any classes of numbers in a proof of number theory? Can one work with any classes of numbers, like natural, integer, rational, real and complex, in a proof of number theory, as long as the result tells us something about the integers? Or should the result be proven using integer operations only (we can only divide one integer by another if the quotient is an integer, because otherwise we fall out of the default domain)? To be more specific, should we stay inside the group of integers when proving results about the integers? If not, are we proving the result in a larger group containing the integers as a subgroup?
Why on earth would you impose such restrictions? There's only one thing you should require of a proof: that it be logically valid. The end. Not only is there absolutely no reason to apply a restriction like that, but modern number theory very frequently steps outside the realm of ordinary integers. Analytic number theory is, roughly speaking, the application of calculus to number theory, while algebraic number theory uses systems of numbers that can include complex numbers (for example, the Gaussian integers $a+bi$ where $a,b$ are integers). I've often heard of something called ergodic theory used in contemporary number theory, although I have no idea how or even what exactly that is.
$$\text{Yes, only natural numbers!}$$ Says Kronecker.
Showing the countable direct product of $\mathbb{Z}$ is not projective I am trying to prove that the direct product $M = \mathbb{Z} \times \mathbb{Z}\times \cdots$ is not a projective $\mathbb Z$-module, and I am stuck near the end of the proof because of the author's use of the phrase infinitely divisible. I will list the sketch of the proof and try to explain where I am stuck. We proceed by contradiction, so suppose $M$ is contained in some free module $F$ with basis $B$. Set $N = \mathbb{Z} \oplus \mathbb{Z} \oplus\cdots$ and observe that $N$ is a submodule of $M$. Since $N \subset F$ there exists $B' \subset B$ such that $B'$ is a basis for $N$; consider the free module $F' \subset F$ determined by $B'$. Notice $F'+M \subset F$ gives $M/(M \cap F') \cong (F'+M)/F' \subset F/F'$, so we may regard $M/(M \cap F')$ as a submodule of $F/F'$. The next step in the proof requires considering sequences of signs, so let $s = (s_1, s_2, \ldots)$ be a sequence of plus and minus signs and consider an element $m_s := (s_1 , 2 s_2, \ldots , k! s_k, \ldots) \in M$. The next point in the proof is what I don't understand: the notes I am using say $m_s +(M \cap F')$ is infinitely divisible in $F/F'$ and use this to show $M$ cannot be contained in any free $\mathbb Z$-module. My question is: How do we show $m_s +(M \cap F')$ is infinitely divisible in $F/F'$, and how do we translate the phrase infinitely divisible into definitions from Hungerford or Dummit and Foote?
First note that if $\mathbb{Z}^{\mathbb{N}}$ is free it has a basis $\beta$ of uncountable cardinality $\kappa$; considered as sets, the cardinality of $\mathrm{Hom}(\beta, \mathbb{Z})$ is then $2^{\kappa}>\kappa$. Now the final argument is that if you take a morphism $f$ from $\mathbb{Z}^{\mathbb{N}}$ to $\mathbb{Z}$ and compose it with the inclusion $i_j$ into the $j$th coordinate, then $fi_j=0$ for almost all $j\in\mathbb{N}$. That is, for almost all elements in the basis the morphism is zero, and from here the cardinality of $\mathrm{Hom}(\mathbb{Z}^{\mathbb{N}},\mathbb{Z})$ is $\kappa$: a contradiction, since $\mathrm{Hom}(\mathbb{Z}^{\mathbb{N}},\mathbb{Z})$ and $\mathrm{Hom}(\beta, \mathbb{Z})$ must have the same cardinality.
Why is multiplication axiomatized? When we learned exponentiation, we first introduced it as an iterated multiplication (for integer powers), then using nth root we defined it for rational powers, and finally we took the limit to define it for arbitrary real power. My question: why can't we do the same with multiplication? Why is multiplication defined with axioms, rather than as an iterated addition?
My understanding is that standard first-order arithmetic, which has axioms for multiplication and addition, is complicated enough to be undecidable etc. (Gödel's theorems). However, an axiom set that uses only addition (Presburger arithmetic) is decidable and is thus much less powerful. I don't understand why adding multiplication is sufficient to get the full power of mathematics while addition alone is not, but that is the case logically. Perhaps I should state explicitly that I am talking about the natural numbers, not the real numbers, but I think that the point of the power of multiplication (excuse the pun) is relevant.
How to find the total distance traveled, given the position function? A particle moves in a straight line according to the rule $x(t)=t^3-2t+5$, where $x(t)$ is given in meters and where $t$ is given in seconds. Determine the position, velocity, and acceleration of the particle at $t=0$ and $t=3$ seconds. How far has the particle moved during this $3$-second period? Answer \begin{align*}x(t)&=t^3-2t+5&x(0)&=5\,m&x(3)&=26\,m\\ v(t)&=3t^2-2&v(0)&=-2\,m/s&v(3)&=25\,m/s\\ a(t)&=6t&a(0)&=0&a(3)&=18\,m/s^2\end{align*} Total distance traveled is $23.18$ m. My question concerns the total distance traveled. I know that by definition the total distance is the length of the path traveled, regardless of direction (as opposed to the net displacement). But how do you get $23.18$ m from the equations?
You should integrate the absolute value of the velocity from 0 to 3. Then you get the desired result.
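A hedged symbolic verification of the $23.18$ m figure (my own addition): split $\int_0^3 |3t^2-2|\,dt$ at the sign change $t_0=\sqrt{2/3}$.

```python
import sympy as sp

t = sp.symbols('t')
v = 3*t**2 - 2
t0 = sp.sqrt(sp.Rational(2, 3))   # v(t0) = 0; v < 0 before, v > 0 after

distance = sp.integrate(-v, (t, 0, t0)) + sp.integrate(v, (t, t0, 3))
print(sp.simplify(distance))   # 21 + 8*sqrt(6)/9
print(sp.N(distance))          # 23.177...
```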
Basically, a particle will be moving in the negative direction if its velocity is negative. As this is straight-line motion with $x$ given in terms of $t$, total distance travelled = (distance travelled in the positive direction) + |distance travelled in the negative direction|. To solve for the total distance travelled:
1. Find the velocity by differentiating $x$.
2. Find the sub-intervals of the given time interval where $v$ is negative.
3. Integrate $v$ over the sub-intervals in which $v$ is positive; for the sub-intervals in which $v$ is negative, attach a '$-$' sign and integrate over the respective times.
4. Finally add the integrated values.
Seems like geometric programming, except equality constraints I have an optimization problem which is quite similar to geometric programming, except that in the equality constraints I have posynomials instead of monomials. Is there any way to change it into the form of a GP? My idea was to express $p(X)=1$ as $(1-\epsilon)\leq p(X)\leq 1$, and I was so happy that it led me to an answer, but unfortunately I cannot handle the left-hand-side inequality. Any help?
My group calls this a "Signomial Equality", and we've implemented a couple of useful heuristics for solving this in http://gpkit.rtfd.org; it's also discussed in a 2013 paper by Xu et al.
changing the order of the logical symbols $(\forall \epsilon)(\exists\delta)$ to $(\exists \delta)(\forall\epsilon)$ in the limit definition Some time ago a professor told the class I was in to analyze why this definition of limit is not good (or, if it is a good definition, to argue why): There exists a $\delta>0$ for all $\epsilon>0$ such that, for all $x$, whenever $0<|x-a|<\delta$ then $|f(x) - L|<\epsilon$. Or, in other words: does changing the order of the logical symbols $(\forall \epsilon)(\exists\delta)$ to $(\exists \delta)(\forall\epsilon)$ in the epsilon-delta limit definition yield a badly functioning definition of limit? Recently this question has returned from the universe of unanswered questions, and it has brought another doubt: is there some counterexample that produces a contradiction with this bad definition? When I try to answer this question I can't find a counterexample, since everything I think of is tied to the original limit definition: I am still trying to find a sufficiently small $\delta$ for an arbitrarily small $\epsilon$. But when I state the above definition, intuitively something sounds different. Can anyone help?
Let's analyse this logically, because what else do we do in mathematics? There exists $\delta$ for all $\epsilon$. This entails that some single $\delta$ is independent of our $\epsilon$: whether $\epsilon=1$ or $\epsilon=10^8$, we must use the same $\delta$, which is bad because a fixed $\delta$ says nothing about how the function behaves as the tolerance shrinks. For example, take $$f(x)=\sin x.$$ With $\epsilon=2$ there clearly exists a $\delta$ satisfying the criterion, but equally clearly that criterion is met by every candidate limit, so by this reasoning, as $x\to 0$, the function would "converge" to all values in $[-1,1]$. The key here is that $(\exists\delta)(\forall\epsilon)$ implies that if $\delta$ is thought of as a function of $\epsilon$, then $\delta(\epsilon)=C$; that is, with respect to $\epsilon$ it is constant.
As the $\delta$ is independent of the $\epsilon$, for the fixed $\delta$ the bound $$|f(x)-L|<\epsilon$$ must hold for every $\epsilon>0$; therefore $f(x)=L$ for all $0<|x-a|<\delta$. That is, $f(x)$ is a constant function (equal to $L$) on $0<|x-a|<\delta$.
Limit of sum of terms containing binomial coefficients $$\lim_{n \to \infty} \sum_{k=0}^n \frac{n \choose k}{k2^n+n}$$ The result is $0$. The $n$ in the denominator can be ignored. If it were not for the $k$ in the denominator, the result would be $1$, but I cannot find the right inequality.
$$\begin{eqnarray*} \sum_{k=0}^{n}\frac{\binom{n}{k}}{k 2^n+n}\leq \frac{1}{n}+\frac{1}{2^n}\sum_{k=1}^{n}\binom{n}{k}\frac{1}{k}&\stackrel{CS}{\leq}&\frac{1}{n}+\frac{1}{2^n}\sqrt{\zeta(2)\cdot\binom{2n}{n}}\\&\leq&\frac{1}{n}+\frac{1}{2^n}\sqrt{\frac{\zeta(2)\cdot 4^n}{\sqrt{\pi n}}}\\&=&O\left(n^{-1/4}\right).\end{eqnarray*}$$ "CS" stands for the Cauchy-Schwarz inequality and the inequality $$ \sum_{k=0}^{n}\binom{n}{k}^2 = \binom{2n}{n}\leq\frac{4^n}{\sqrt{\pi n}} $$ is well-known. It comes from the usual manipulations about the Wallis product.
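A hedged numeric sanity check (my own addition) that the sum indeed decays to $0$, consistent with the $O(n^{-1/4})$ bound:

```python
from math import comb

def S(n):
    # exact big-integer arithmetic until the final true division
    return sum(comb(n, k) / (k * 2**n + n) for k in range(n + 1))

for n in (10, 100, 1000):
    print(n, S(n))   # the values shrink toward 0
```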
From this P. Stanica paper the approximation \begin{align} \binom{n}{k} \sim \sqrt{\frac{2}{n \, \pi}} \, 2^{n} \, e^{-\frac{(n-2k)^{2}}{2n}} \end{align} is given. Utilizing this result, the desired limit is evaluated as follows: \begin{align} \lim_{n \to \infty} \, \sum_{k=0}^{n} \frac{n \choose k}{k2^n+n} = \lim_{n \to \infty} \, \sqrt{\frac{2}{n \, \pi}} \, \sum_{k=0}^{n} \frac{1}{k+ \frac{n}{2^{n}}} \, e^{-\frac{(n-2k)^{2}}{2n}} = 0. \end{align}
Proofs of AM-GM inequality The arithmetic - geometric mean inequality states that $$\frac{x_1+ \ldots + x_n}{n} \geq \sqrt[n]{x_1 \cdots x_n}$$ I'm looking for some original proofs of this inequality. I can find the usual proofs on the internet but I was wondering if someone knew a proof that is unexpected in some way. e.g. can you link the theorem to some famous theorem, can you find a non-trivial geometric proof (I can find some of those), proofs that use theory that doesn't link to this inequality at first sight (e.g. differential equations …)? Induction, backward induction, use of Jensen inequality, swapping terms, use of Lagrange multiplier, proof using thermodynamics (yeah, I know, it's rather some physical argument that this theorem might be true, not really a proof), convexity, … are some of the proofs I know.
Pólya's Proof: Let $f(x) = e^{x-1}-x$. The first derivative is $f'(x)=e^{x-1}-1$ and the second derivative is $f''(x) = e^{x-1}$. $f$ is convex everywhere because $f''(x) > 0$, and has its minimum at $x=1$, where $f(1)=0$. Therefore $x \le e^{x-1}$ for all $x$, with equality only when $x=1$. Using this inequality we get $$\frac{x_1}{a} \frac{x_2}{a} \cdots \frac{x_n}{a} \le e^{\frac{x_1}{a}-1} e^{\frac{x_2}{a}-1} \cdots e^{\frac{x_n}{a}-1}$$ with $a$ being the arithmetic mean. The right side simplifies to $$\exp \left(\frac{x_1}{a} -1 \ +\frac{x_2}{a} -1 \ + \cdots + \frac{x_n}{a} -1 \right)$$ $$=\exp \left(\frac{x_1 + x_2 + \cdots + x_n}{a} - n \right) = \exp(n - n) = e^0 = 1$$ Going back to the first inequality, $$\frac{x_1x_2\cdots x_n}{a^n} \le 1$$ So we end with $$\sqrt[n]{x_1x_2\cdots x_n} \le a$$
My idea is to use only the Cauchy–Schwarz inequality, by induction. Applying the Cauchy–Schwarz inequality to the vectors $(\sqrt{a},\sqrt{b})^T$ and $(\sqrt{b},\sqrt{a})^T$ we get: $$\sqrt{ab} +\sqrt{ab} \le (a+b)^{1/2}(a+b)^{1/2} $$ that is $$\sqrt{ab} \le \frac{a+b}{2}~~~~~~\color{red}{\text{AM-GM for $n =2$}}$$ Hypothesis of induction: Assume that for all $p\in\{1,2,\cdots, n-1\}$ we have, \begin{align*} \color{blue}{(a_1a_2\cdots a_p)^{1/p}\leq \frac{a_1+a_2+\cdots+a_p}{p}} \end{align*} We want to prove that it is true for $n$. We will discuss two cases: $n =2p$ and $n=2p-1$. If $n =2p$, then we proceed as follows, using the case $n=2$ and the case $n=p$: $$ \begin{align}\color{blue}{\frac{a_1+a_2+\cdots+a_n}{n} }&= \frac{1}{2}\left(\overbrace{\frac{a_1+a_2+\cdots+a_p}{p}}^a+\overbrace{\frac{a_{p+1}+a_{p+2}+\cdots+a_n}{p}}^b \right) \\ &\ge\sqrt{\left(\frac{a_1+a_2+\cdots+a_p}{p}\right) \left(\frac{a_{p+1}+a_{p+2}+\cdots+a_n}{p}\right)}\\ &\ge\sqrt{(a_1a_2\cdots a_p)^{1/p}(a_{p+1}a_{p+2}\cdots a_n)^{1/p}}\\ &=\sqrt{(a_1a_2\cdots a_n)^{1/p}} = \color{blue}{(a_{1}a_{2}\cdots a_n)^{1/n} } ~~~~~~\color{red}{\text{AM-GM for $n =2p$}} \end{align}$$ If $n =2p-1$, we use the case $n+1=2p$ as follows (note that $p<n$). Taking $$ a_{n+1} = \frac{a_1+a_2+\cdots+a_n}{n}$$ in the previous case we obtain, \begin{align}&\frac{a_1+a_2+\cdots+a_{n+1}}{n+1} \ge (a_1a_2\cdots a_{n+1})^{\frac{1}{n+1}}\\ &\Longleftrightarrow\frac{1}{n+1}\left(a_1+a_2+\cdots+a_n+\overbrace{\frac{a_1+a_2+\cdots+a_n}{n}}^{a_{n+1}} \right) \ge \left(a_1a_2\cdots a_{n}\left(\overbrace{\frac{a_1+a_2+\cdots+a_n}{n}}^{a_{n+1}}\right) \right)^{\frac{1}{n+1}}\\ \\&\Longleftrightarrow\frac{a_1+a_2+\cdots+a_n}{n} \ge \left(a_1a_2\cdots a_{n}\left(\frac{a_1+a_2+\cdots+a_n}{n}\right) \right)^{\frac{1}{n+1}}\\ &\Longleftrightarrow\left(\frac{a_1+a_2+\cdots+a_n}{n}\right)^{\frac{n}{n+1}} \ge \left(a_1a_2\cdots a_{n}\right)^{\frac{1}{n+1}} \\&\Longleftrightarrow \color{blue}{\frac{a_1+a_2+\cdots+a_n}{n} \ge \left(a_1a_2\cdots a_{n}\right)^{\frac{1}{n}}}~~~~\color{red}{\text{AM-GM for $n =2p-1$}}\end{align}
Dirichlet characters mod $q$ of conductor $f_\chi|d|q$ Let $\hat{G}_q$ be the set of Dirichlet characters modulo $q$. If $d|q$, how many $\chi\in\hat{G}_q$ are there with $f_\chi|d$? ($f_\chi$ denotes the conductor of $\chi$.) And what if we count the $\chi\in\hat{G}_q$ with $f_\chi|d$ and $\chi(-1)=\pm1$? I expect the number to be $\frac12\phi(d)$ but don't know how to prove that. Thanks a lot!
There are $\phi(q)$ Dirichlet characters modulo $q$. If $\chi$ is a Dirichlet character modulo $d \mid q$ then $\tilde{\chi}(n)=\chi(n)\, 1_{\gcd(n,q)=1}$ is a Dirichlet character modulo $q$. Let $f(q)$ be the number of primitive Dirichlet characters modulo $q$; then $$\phi(q) =\sum_{d | q} f(d) \quad \\ \implies \quad f(q) = \sum_{d | q} \mu(d) \phi(q/d) = \prod_{p^k \| q} f(p^k)= \prod_{p^k \| q} (\phi(p^k)-\phi(p^{k-1}))$$ For $q > 2$: the map $\{ \chi \bmod q \} \to \{-1,1\}$, $\chi \mapsto \chi(-1)$ is a surjective group morphism whose kernel is the set of Dirichlet characters with $\chi(-1) = 1$. Thus there are $\phi_1(q) = \frac{\phi(q)}{2}$ Dirichlet characters modulo $q$ with $\chi(-1)= 1$ (except for the special cases $\phi_1(1)= \phi_1(2) =1$). Also $\tilde{\chi}(-1) = \chi(-1)$, thus again (with $f_1(q)$ the number of primitive characters modulo $q$ with $\chi(-1)=1$, and with the convention that $\mu(q/2)=0$ when $q$ is odd) $$\phi_1(q) =\sum_{d |q} f_1(d) \implies f_1(q) = \sum_{d | q} \mu(d)\phi_1(q/d) = \frac{\mu(q)+\mu(q/2)+f(q)}{2}$$ and hence $$f_{-1}(q) = f(q)-f_1(q) = \frac{-\mu(q)-\mu(q/2)+f(q)}{2}$$
OK, maybe I got it by myself: Let us define $\hat{P}_{d,\pm}:=\{\text{primitive Dirichlet characters $\tilde{\chi}$ with conductor dividing }d\text{ and }\tilde{\chi}(-1)=\pm1\}$ and $\hat{G}_{d,\pm}=\{\chi\in\hat{G}_q:f_\chi|d\text{ and }\chi(-1)=\pm1\}$. Let $\varphi:\hat{G}_{d,\pm}\longrightarrow \hat{P}_{d,\pm}$, $\chi\mapsto\chi'$, with $\chi'$ the primitive character inducing $\chi$. If $\tilde{\chi}\in\hat{P}_{d,\pm}$, set $\chi(n)=\tilde{\chi}(n)$ for all $(n,q)=1$ and $0$ otherwise. Thus, $\chi\in\hat{G}_q$ and $\tilde{\chi}$ induces $\chi$. So $\varphi(\chi)=\chi'$ and $\tilde{\chi}$ are both primitive characters inducing $\chi$, therefore $\varphi(\chi)=\tilde{\chi}$. But as $f_\chi=f_{\chi'}=f_{\tilde{\chi}}$ and $\chi(-1)=\chi(q-1)=\tilde{\chi}(q-1)=\pm1$ we get $\chi\in\hat{G}_{d,\pm}$. The uniqueness of $\chi$ is obvious. So we get $\hat{G}_{d,\pm}\cong\hat{P}_{d,\pm}$. Now, as $\hat{P}_d\cong\hat{G}_d$ it follows easily that $|\hat{P}_{d,\pm}|=\frac12\phi(d)$.
Finding $\lim\limits_{x\to 0} \frac{a^x-1}{x}$ without L'Hopital and series expansion. So we have to find $\lim\limits_{x\to 0} \frac{a^x-1}{x}$ without using any series expansions or the L'Hopital's rule. I did it using both but I have no idea how to do it. I tried many substitutions but nothing worked. Please point me in the right direction.
In order to evaluate the limit $$\lim_{x \to 0}\frac{a^{x} - 1}{x}\tag{1}$$ for $a > 0$, it is necessary to have a proper definition of $a^{x}$. This has been pointed out in one of the comments to the question, but OP has perhaps not understood the necessity of such a comment. In case you want to evaluate the limit $$\lim_{x \to 1}\frac{\sqrt{x} - 1}{x - 1}$$ it is necessary to have a definition of the symbol $\sqrt{x}$, and it is easy to define it by saying that $\sqrt{x}$ is the non-negative real number which gives $x$ when squared. The unfortunate part here is that there is no definition of $a^{x}$ which is as simple as the definition of $\sqrt{x}$, and most introductory calculus texts try their best not to define this symbol in a proper manner. The real sin is committed when they are able to convince the students that there is no necessity of such a definition. It is one thing to avoid a complicated definition but totally another matter to convince that such a thing does not exist. There are multiple approaches to define $a^{x}$ and almost all of them are hard enough for a beginner in calculus (see these posts here in case you are interested). I prefer the following approach, which is suitable for a beginner in calculus: Theorem: For a given real number $a > 0$ there exists a unique function $F(x)$ defined and continuous for all real $x$ such that $F(0) = 1, F(1) = a$ and $F(x + y) = F(x)F(y)$ for all $x, y$. Moreover if $x$ is a rational number (say equal to $r \in \mathbb{Q}$) then $F(x) = F(r) = a^{r}$. This function is called the general exponential function and denoted by the symbol $a^{x}$. And then we come to the limit in question: Theorem: For any real number $a > 0$ the limit $$\lim_{x \to 0}\frac{a^{x} - 1}{x}$$ exists and hence defines a function of $a$ (say $L(a)$), and this function $L(a)$ satisfies the following properties: $$L(1) = 0, L(xy) = L(x) + L(y)$$ Further $L(x)$ is continuous for all $x > 0$ and $$\lim_{x \to 0}\frac{L(1 + x)}{x} = 1\tag{2}$$ The function $L$ is traditionally denoted by $\log x$ and is called the natural logarithm (or just the logarithm) of $x$. Essentially the above approach gives two standard limits $$\lim_{x \to 0}\frac{a^{x} - 1}{x} = \log a,\,\lim_{x \to 0}\frac{\log(1 + x)}{x} = 1$$ and defers their proof (to advanced courses). But it is more honest than supplying vague ideas and intuition about exponential and logarithmic functions, which I consider more of a hand-waving exercise.
Convert it into a standard limit by writing $a^x$ as $e^{x\ln a}$ and using $\lim_{x \to 0} \dfrac{e^{x}-1}{x}=1$.
Approximation of a $L^1$ function by a dominated sequence of continuous functions Consider $\mathbb{T}=\{z\in\mathbb{C}\mid |z|=1\}$ and the Lebesgue measure on it. Denote by $L^1(\mathbb{T})$ the set of integrable functions on $\mathbb{T}$ and by $C(\mathbb{T})$ the set of continuous functions on $\mathbb{T}$. I wonder whether or not the following proposition is true. Proposition. Let $0\leq f\in L^1(\mathbb{T})$. Then there exist a sequence $\{g_n\}_{n\in\mathbb{N}}\subset C(\mathbb{T})$ and a function $g\in L^1(\mathbb{T})$ such that $g_n\to f$ a.e. and $|g_n(t)|\leq g(t)$ a.e. for all $t\in\mathbb{T}$ and all $n\in\mathbb{N}$. Just to add some context, I came up with this question while trying to find an alternative proof of a Fejér-like theorem. I tried the usual techniques (density of continuous functions in $L^1(\mathbb{T})$, Luzin's theorem, etc.) but I didn't succeed. Thank you in advance. MD
Yes, your proposition is true. By density of the continuous functions, there is a sequence of continuous $g_n$ with $g_n \to f$ in $L^1$. Replacing $g_n$ with $g_n^+$, we can assume $g_n \ge 0$. Passing to a subsequence, we can also assume $g_n \to f$ almost everywhere. Passing to a further subsequence, we can assume $\|g_n - f\|_{L^1} \le 2^{-n}$. (Actually, the first "pass to a subsequence" is unnecessary. As soon as $\sum_n \|g_n - f\|_{L^1} < \infty$, a Borel-Cantelli argument ensures that $g_n \to f$ almost everywhere.) I claim $g_n$ is the desired sequence. It remains to construct the dominating function $g$. Let $h_n = (g_n - f)^+$. Then $h_n$ is measurable, $\|h_n\|_{L^1} \le \|g_n -f\|_{L^1} \le 2^{-n}$, and we have $h_n \ge 0$ and $g_n \le f + h_n$. Set $g = f + \sum_{n=1}^\infty h_n$. Now by monotone convergence $$\int \sum_{n=1}^\infty h_n = \sum_{n=1}^\infty \int h_n \le \sum_{n=1}^\infty 2^{-n} = 1$$ so $g$ is integrable. And for each $n$ we have $g_n \le f + h_n \le g$. This argument didn't use any topology. Indeed, it still goes through if we replace $\mathbb{T}$ by any measure space $(X,\mu)$, and replace $C(\mathbb{T})$ by any dense subset $E \subset L^1(X,\mu)$ which is closed under the "positive part" operation. Maybe that latter condition can be weakened even further.
If $f$ is continuous, the result is a consequence of the Arzelà-Ascoli theorem. Because the sequence is non-negative, there is a sequence of measurable functions in $L^1$ that converges to $f$. Besides, thanks to the density of the continuous functions in $L^1$, we can assume that these functions are continuous. Let $F$ be such a sequence of functions, i.e. $F=\{f_{n}:n\in \mathbb{N} \}\subset C(\mathbb{T})$. Any sub-sequence of functions in $F$ converges in $F\cup\{f\}\subset C(\mathbb{T})$. Arzelà-Ascoli implies that $F$ is uniformly bounded, i.e. that the sequence satisfies 2. For the general case, since $f$ is $L^1$, you can take a sequence of continuous functions converging to $f$. You can prove that there is a bound that holds for any case.
Reducing 12 modulo 9 This example is from a text on Modular Arithmetic. We want to reduce 12 modulo 9. Here are several results which are correct according to the definition: 1) 12 ≡ 3 mod 9, 3 is a valid remainder since 9|(12−3) 2) 12 ≡ 21 mod 9, 21 is a valid remainder since 9|(21−3) 3) 12 ≡ −6 mod 9, −6 is a valid remainder since 9|(−6−3) I understand the first one. For a ≡ r mod m, we can write m | (a - r). So in the first example, a = 12, r = 3, m = 9. From that we get 9 | (12 - 3). But for the second, a similar substitution would be 9 | (12 - 21) - yet in the example, they have (21 - 3) instead of (12 - 21). Likewise in the $3^{rd}$, they have (-6 - 3) instead of (12 + 6). How do they arrive at this in the $2^{nd}$ and $3^{rd}$ examples?
If $a \equiv b \pmod m$ and $b \equiv c \pmod m$, then $a \equiv c \pmod m$. This is called transitivity and you should prove that it is true. For your second example: The author in your text uses $12 \equiv 3 \pmod 9$ and $3 \equiv 21 \pmod 9$, so $$ 12 \equiv 3 \equiv 21 \pmod 9.$$ Of course, you can also just check $9 \mid 12 - 21$, as you said.
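If you like to experiment, the transitivity claim is trivial to check by machine (a throwaway Python check; note that Python's % operator always returns a representative in $\{0,\dots,8\}$):

    for r in (3, 21, -6):
        print(r, (12 - r) % 9 == 0, r % 9)   # True and residue 3 in every case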
I am answering my own question. I think the author is trying to find all answers which form an equivalence class for mod 9. 3, 21, and -6 are all part of the equivalence class for mod 9. { ...-15, -6, 3, 12, 21, ... } is one of the 9 equivalence classes for mod 9. That's how he gets to (2) and (3) from (1)
Checking understanding on proving uniqueness of identity and inverse elements of a group. Sorry for such a trivial question, but I just wanted to check my understanding. When proving a statement, for example, that the inverse of a group element is unique (in elementary group theory), one starts by supposing that there exist two inverses $h$ and $k$ for a given element $g \in G$, where $G$ is some group with binary operation $\ast:G\times G \rightarrow G$, such that $h\ast g = g\ast h = e$ and $k\ast g = g\ast k = e$, where $e \in G$ is the unique identity for the group $G$. From this, one can show that $h=k$ in the following manner: $$ h=h\ast e =h\ast\left( g\ast k\right) = \left(h\ast g\right)\ast k = e\ast k =k$$ and as such $h=k$. Now, is the reason we can conclude from this that the inverse of an element $g\in G$ is unique the fact that $h$ and $k$ were chosen arbitrarily, apart from the requirement that they are inverses of $g$? In that case, if we know the value of one of the inverses, say $h$, then any other value $k$ that is an inverse of $g$ must be equal to the known inverse $h$, and thus $h$ is the unique inverse of $g$. Is this reasoning correct? Sorry for the wordiness of this question, but I just wanted to check my understanding explicitly. Also, although I understand that, logically, I should have proven this first (but I'd already written out the inverses part before thinking of this, so apologies for that), is the reasoning the same for arguing that the identity element of a group is unique? (i.e. If we assume that there are two identity elements $e,g \in G$ and subsequently show that $e=g$, then this implies that if we know the form of one of the identities, say $e$, then any other value $g$ that is an identity of the group $G$ must be equal to $e$, and thus the identity element of a group is unique.)
Group axioms tell you that an identity element must exist, and also that every element has an inverse. They don't tell you that there's only one identity element, and they don't tell you that an element can have only one inverse: these are things that you have to prove. So, if for example you want to prove that there is only one identity element, suppose instead that there are more than one. If so, you can choose two of them that are not the same element: yet as the proof goes on you see that those elements are instead the same one, as you know. This means that supposing that there are lots of identity elements yields a contradiction: hence, since you can't have more than one of them, and since at least one has to exist because of the group axioms, you see that the only possible option is that there's exactly only one identity element. The same happens when you're trying to show that every element of the group has only one inverse.
Suppose $x\in G$ has two inverses $y$ and $z$; then $zx=e=yx$, so $$zx=yx\Rightarrow (zx)y=(yx)y\Rightarrow z(xy)=y(xy)\Rightarrow ze=ye.$$ So $y=z.$
Good book on abstract algebra I am an undergraduate student - I have just started to learn about group theory, ring theory and module theory. I am self-studying from the book "Basic Abstract Algebra" by P.B. Bhattacharya, S.K. Jain, S.R. Nagpaul. My prof said that this book is written in an uncomplicated manner, but I found that its statements come with hints rather than complete proofs. What are your thoughts about this book? Our prof also suggested Abstract Algebra by Dummit and Foote - but I found it quite bulky. Please enlighten me with your thoughts, as I am very much lost about which one to follow. Thank you.
Dummit and Foote. Its bulkiness means it'll serve you well for years. It's detailed, patient, logical, and comprehensive. Some of its later material isn't great in my opinion, like its treatment of homological algebra with basically no motivation or punchline, but few people make it there anyway. Its character theory is actually quite good too.
Lang's Algebra is a classic, or near classic, and there's no reason not to start consulting it, even though it is advanced. Meanwhile, to address the question, Fraleigh's A First Course in Abstract Algebra would surely not have been chosen by professor John Stallings, who was surely one of the best group theorists of his generation, as the textbook for his undergrad course at Berkeley, if it were anything but a lovely introduction to the subject. And that's exactly what I found it to be.
Integer solutions of an equation that is set to a number How many integer solutions for $a$ and $b$ are there in $(ab)/(a+b)=3600$? My attempt: $(ab)/(a+b)=3600$, so $ab=3600(a+b)$, i.e. $ab=3600a+3600b$, hence $ab-3600a=3600b$. Dividing both sides by $3600b$ left me with something like $a(1-3600)/3600$. I am not really sure if this is correct. I was just doing this for fun after I read about it in a mathematics book; it only explained a few things, not much. Can someone please help me with this? I have been trying this for a while now and wanted to know what the solution would look like.
HINT : Note that $$\begin{align}\frac{ab}{a+b}=3600&\iff ab=3600(a+b)\ \ \text{and}\ \ a+b\not=0\\&\iff (a-3600)(b-3600)=3600^2\ \ \text{and}\ \ a+b\not=0.\end{align}$$ Can you take it from here?
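To check a final count obtained from this hint, here is a hypothetical brute-force sketch in Python (it enumerates the integer factorizations of $3600^2$, forms the ordered pairs $(a,b)$, and discards any pair with $a+b=0$):

    import math

    N = 3600**2
    divisors = set()
    for d in range(1, math.isqrt(N) + 1):
        if N % d == 0:
            divisors.update({d, N // d})

    count = 0
    for d in divisors:
        for s in (1, -1):                 # both factors positive, or both negative
            a, b = 3600 + s * d, 3600 + s * (N // d)
            if a + b != 0 and a * b == 3600 * (a + b):
                count += 1
    print(count)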
We get $ab=3600(a+b)$ or $-3600a+ab-3600b=0$. We can factor $-3600a+ab$ as $a(-3600+b)$. We get $$a(-3600+b)-3600b=0.$$ Adding $3600^2$ to both sides gives $$a(-3600+b)+3600(3600)-3600b=3600^2$$ or equivalently, $$a(-3600+b)-3600(-3600)-3600b=3600^2.$$ The left side factors as $a(-3600+b)-3600(-3600+b)$, or equivalently, $$(a-3600)(-3600+b)=3600^2.$$ The prime factorization of $3600^2$ is $(2^4 \cdot 3^2 \cdot 5^2)^2$ or $2^8 \cdot 3^4 \cdot 5^4$. There are $225$ factors of $3600^2$. The factors $(a-3600)$ and $(-3600+b)$ can each be divisors of $3600^2$, and since there are $225$ factors, there are 225 integer solutions for positive $a$ and $b$. There are also $225$ integer solutions for negative $a$ and $b$. Now we must deal with the condition where $a+b \neq 0$. We can let $a=-b$ and plug this into our equation to find our wrong solutions. We get $$(-b-3600)(b-3600)=3600^2.$$ $(-b-3600)$ must equal $\pm 3600$. $3600$ factors as $2^4 \cdot 3^2 \cdot 5^2$ so there are $45$ solutions for positive $b$ and $45$ solutions for negative $b$. So there are $90$ wrong solutions. In total, we have $225+225-90=\boxed{360}$ integer values for $a$ and $b$.
Evaluate the double integral $\int_{-1}^{0} \int_{1}^{2}(x^2y^2 + xy^3)dydx$ $\int_{-1}^{0} \int_{1}^{2}(x^2y^2 + xy^3)dydx$ \begin{align} \int_{-1}^{0} \int_{1}^{2}(x^2y^2 + xy^3)dydx = \int_{-1}^{0} \bigg[\frac{x^2}{3}y^3 + \frac{x}{4}y^4\bigg]_{1}^{2}dx = \\ \int_{-1}^{0} \bigg[ \frac{8x^2}{3} + \frac{16x}{4}\bigg] - \bigg[\frac{x^2}{3} - \frac{x}{4} \bigg]dx = \int_{-1}^{0}\bigg( \frac{7x^2}{3} + \frac{17x}{4}\bigg)dx \\= \bigg[\frac{7}{9}x^3 + \frac{17}{8}x^2 \bigg]_{0}^{-1} = 0 - \bigg[ \frac{-7}{9} - \frac{17}{8}\bigg] = \frac{209}{72} \end{align} This is my work, but the textbook says the answer is $-\frac{79}{72}$. I double checked my work and everything seems to be checking out for me on my end, wasn't sure if the textbook made a mistake or if there is a super small mistake I am missing here.
Here is the first of the two sign errors: $$\bigg[ \frac{8x^2}{3} + \frac{16x}{4}\bigg] - \bigg[\frac{x^2}{3} + \frac{x}{4} \bigg]$$ See the + before the $\frac{x}{4}$. At the end it should be: $$\bigg[\frac{7}{9}x^3 + \frac{15}{8}x^2 \bigg]_{-1}^{0} = 0 - \bigg[ \frac{-7}{9} + \frac{15}{8}\bigg] = -\frac{79}{72}$$ Note the + instead of the - before the $\frac{15}{8}$, and the limits $-1$ to $0$ in the right order.
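For anyone who wants an independent numerical cross-check of the corrected value, a throwaway sketch (assuming SciPy is installed; dblquad integrates its first argument, here $y$, over the inner limits):

    from scipy.integrate import dblquad

    val, err = dblquad(lambda y, x: x**2 * y**2 + x * y**3,
                       -1, 0,                      # x-limits (outer)
                       lambda x: 1, lambda x: 2)   # y-limits (inner)
    print(val, -79/72)                             # should agree to within err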
The textbook is correct. $$\int_{-1}^{0} \int_{1}^{2} x^2y^2 + xy^3\mathop{dy}\mathop{dx} = \int_{-1}^{0} x^2 \cdot \frac{y^3}{3} + x\cdot \frac{y^{4}}{4}\Big|_{y = 1}^{y = 2} \mathop{dx} $$ $$= \int_{-1}^{0} \left(x^2 \cdot \frac{8}{3} + x \cdot 4\right) - \left(x^2 \cdot \frac{1^{3}}{3} + x \cdot \frac{1}{4}\right) \mathop{dx}$$ $$ = \int_{-1}^{0} \frac{7x^2}{3} + \frac{15x}{4} \mathop{dx}$$ $$ = \left(\frac{7x^{3}}{9} + \frac{15x^{2}}{8}\right)\Big|_{x = -1}^{x = 0}$$ $$ = \left(0\right) - \left(-\frac{7 }{9} + \frac{15}{8}\right) $$ $$= \frac{7}{9} - \frac{15}{8} = \boxed{-\frac{79}{72}}$$
Prove that $n^2(n^2+1)(n^2-1)$ is a multiple of $5$ for any integer $n$. Prove that $n^2(n^2+1)(n^2-1)$ is a multiple of $5$ for any integer $n$. I was thinking of using induction, but wasn't really sure how to do it.
Hint: $$ n^2(n^2+1)(n^2-1)\equiv n^2(n^2-4)(n^2-1) = (n-2)(n-1)\,n^2\,(n+1)(n+2) \pmod 5, $$ since $n^2+1\equiv n^2-4\pmod 5$; and among any five consecutive integers, one of them is divisible by $5$.
Lemma: For any polynomial $P$ with integer coefficients, the values $P(0),P(1),P(2),P(3),P(4)$ exhaust the possible values of $P(n)\bmod 5$. Indeed for any $a$, $(n+a)\bmod 5=(n\bmod5+a)\bmod5$, and $(na)\bmod5=((n\bmod5)a)\bmod5$, so that $P(n)\bmod5=P(n\bmod 5)\bmod5$. QED. Now, since $(n^2+1)(n^2-1)=n^4-1$, it suffices to check $n^2(n^4-1)$ for $n=0,1,2,3,4$: $$0^2(0^4-1)\bmod5=0\\ 1^2(1^4-1)\bmod5=1\cdot0\bmod5=0\\ 2^2(2^4-1)\bmod5=4\cdot15\bmod5=0\\ 3^2(3^4-1)\bmod5=9\cdot80\bmod5=0\\ 4^2(4^4-1)\bmod5=16\cdot255\bmod5=0$$
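The five residue checks are also easy to replicate mechanically (a one-loop Python check over a full residue system mod 5):

    for n in range(5):
        print(n, (n**2 * (n**2 + 1) * (n**2 - 1)) % 5)   # prints 0 every time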
Uniform continuity implies existence of increasing continuous function In the book that I'm reading, the author makes the following assertion, which I was not able to prove: If $c:\mathbb R \times \mathbb R\to \mathbb R$ is a continuous function on a compact set (i.e. $c$ restricted to a compact set) and hence uniformly continuous, then there exists an increasing continuous function $w:\mathbb R_+ \to \mathbb R_+$, with $w(0)=0$ and such that $$ | c(x,y) - c(x',y') | \leq w(d(x,x') + d(y,y')) $$ Can anyone prove that this is in fact true? P.S.: Note that $d$ is a metric.
A friend of mine showed me the solution, so here it is: I'll solve this for $\mathbb R$ instead of $\mathbb R^2$ for the sake of simplicity, because the proof is the same. So, I'll replace $c:\mathbb R \times \mathbb R \to \mathbb R$ with $f:\mathbb R\to \mathbb R$, also continuous. Let $K$ be the compact set and $$ w(z) := \sup_{x,y:d(x,y)\leq z} |f(x) - f(y)|$$ Hence, $|f(x) - f(y)|\leq w(d(x,y))$. Also, it's clear that $w$ is an increasing function. We have to prove that $w$ is continuous and that $w(0)=0$. It's clear that $w(0) = 0$, since $d(x,y) =0 \iff x=y$. The only thing left to prove is the continuity, which we'll prove for $w$ at $0$; a similar proof can be done for the other points. Note that since $f$ is continuous on a compact set, it is then uniformly continuous. Therefore, $$\forall \epsilon>0, \exists \delta>0, \quad \text{s.t} \quad d(x,y) \leq \delta \implies |f(x) - f(y)| \leq \epsilon$$ Taking the $\sup$, we obtain that: $$ \sup_{x,y:d(x,y)\leq \delta}|f(x) - f(y)| = w(\delta) \leq \epsilon $$ This means that $d(x,y)\leq \delta \implies w(\delta) \leq \epsilon$; hence $\lim_{\delta \to 0}w(\delta) = 0$, which proves that $w$ is continuous at $0$. The only thing left to do is to prove the continuity at the rest of the points, which I actually could not do.
This is not true in general. Consider $c(x,y) = x^2$ and the points $(x, y), (x', y')$ such that $d(x, x') = d(y, y') =a$ in which case the RHS is 0. But $x^2 - x'^2$ is not always 0.
Show that there are polynomials $q(x)$ and $r(x)$ with integer coefficients such that $f(x)=g(x)q(x)+r(x)$ and $\deg(r)<\deg(g)$. Let $f(x), g(x)$ be polynomials with coefficients in $\mathbb Z$. Suppose that $\deg(g)≥1$ and that the leading coefficient of $g$ is $1$. Show that there are polynomials $q(x)$ and $r(x)$ with integer coefficients such that $f(x)=g(x)q(x)+r(x)$ and $\deg(r)<\deg(g)$.
By letting $q=0, r=f$, we see that there are polynomials $q,r$ with integer coefficients such that $f=qg+r$. Among all such pairs of polynomials, pick one that minimizes $\deg r$. I claim that $\deg r<\deg g$. Indeed, assume that $\deg r\ge \deg g$ were possible, where $g(x)=x^n+a_{n-1}x^{n-1}+\ldots +a_0$ and $r(x)=b_mx^m+\ldots +b_0$ with $m\ge n$ and $b_m\ne 0$, say. Then for $q^*(x):=q(x)+b_mx^{m-n}$ and $r^*(x):=r(x)-b_mx^{m-n}g(x)$ we have $f=q^*g+r^*$ and $\deg r^*<m=\deg r$, contradiction.
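The minimizing argument above is really the familiar long-division loop in disguise. For concreteness, here is a hypothetical Python sketch (the helper name and the lowest-degree-first coefficient convention are my own choices); it stays inside $\mathbb{Z}$ precisely because $g$ is monic:

    def divmod_monic(f, g):
        """Return (q, r) with f = q*g + r and deg r < deg g.
        Polynomials are lists of integer coefficients, lowest degree first;
        g must be monic, i.e. its last coefficient is 1."""
        assert g and g[-1] == 1
        q = [0] * max(len(f) - len(g) + 1, 1)
        r = list(f)
        while len(r) >= len(g):
            shift, lead = len(r) - len(g), r[-1]
            q[shift] += lead                 # subtract lead * x**shift * g from r
            for i, c in enumerate(g):
                r[shift + i] -= lead * c
            while r and r[-1] == 0:
                r.pop()
        return q, r

    print(divmod_monic([2, 3, 1], [1, 1]))   # (x^2+3x+2)/(x+1): q = x+2, r = 0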
You can get $q(x)$ by doing the polynomial division $\frac{f(x)}{g(x)}$. Then you will find that all coefficients of $r(x)$ of degree $\ge \deg(g)$ are $0$. Therefore $\deg(r)<\deg(g)$ or $r(x)=0$. This is called the division algorithm for polynomials.
Circumsphere of a tetrahedron undefined? I am trying to find 3D alpha shapes from my data-set. In doing so, I am keeping only those tetrahedra that have circumradius below a certain threshold. However, while finding the circumradius of the tetrahedra making up the Delaunay triangulation of the data, I am finding that for some cases it becomes imaginary. I am using the standard formulae, given for example in http://mathworld.wolfram.com/Circumsphere.html. Are these formulae always valid for any (non-degenerate) tetrahedron? An example of the vertices of a tetrahedron for which it fails:

    1.0e+03 *
    -0.882361572000000   1.832846680000000   8.039920898000000
    -0.871205933000000   2.190948975000000   7.713502440999999
    -0.874571533000000   1.637495972000000   7.953884766000000
    -0.945120239000000   1.753712891000000   8.093748535000000

I am using MATLAB here, along with its delaunayn function. Essentially, in the formula $r = \frac{ \sqrt{D_x^2 + D_y^2 + D_z^2 - 4 a c} }{2 |a| }$, the discriminant under the square root becomes negative. My computed values are: [$D_x \, D_y \, D_z \, a \, c$] = 1.0e+14 * 0.000152327124454 0.000160507051388 -0.000890797019744 -0.000000057932589 -3.705507881061755
If this was StackOverflow they would say "show me your code." It looks like you've at least done something with the minus sign in calculating Dy. Here's my code, where the four vertices of the tetrahedron are stored in the 4-by-3 matrix xyz:

    w = ones(4,1);
    s = sum(xyz.^2,2);
    Dx = det([s xyz(:,2:3) w])
    Dy = -det([s xyz(:,[1 3]) w])
    Dz = det([s xyz(:,1:2) w])
    a = det([xyz w])
    c = det([s xyz])
    r = 0.5*sqrt(Dx^2+Dy^2+Dz^2-4*a*c)/abs(c)

The resulting radius is a positive real value: about 8.72e-6. Yes, the negative Dy goes away when it gets squared (and could be omitted in your case) – did you maybe put the negative inside of the square root? I can't answer your more mathematical question, but I'd hazard a guess that this formula is fine. I'd think that numerical precision is potentially the bigger problem. There's potential for overflow in your case if the numbers get bigger.
In my opinion the radius should be: $r = 0.5\times\sqrt{Dx^2+Dy^2+Dz^2-4ac}/|a|\approx 0.5577$. But thanks, you helped me a lot.
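For readers without MATLAB, here is a rough NumPy transcription of the same determinant recipe (an untested sketch; following the formula quoted in the question, and the follow-up directly above, it divides by $2|a|$ rather than by $|c|$):

    import numpy as np

    def circumradius(xyz):
        """Circumradius of a tetrahedron; xyz is a 4x3 array of vertices."""
        xyz = np.asarray(xyz, dtype=float)
        w = np.ones((4, 1))
        s = np.sum(xyz**2, axis=1, keepdims=True)
        Dx = np.linalg.det(np.hstack([s, xyz[:, 1:3], w]))
        Dy = -np.linalg.det(np.hstack([s, xyz[:, [0, 2]], w]))
        Dz = np.linalg.det(np.hstack([s, xyz[:, 0:2], w]))
        a = np.linalg.det(np.hstack([xyz, w]))
        c = np.linalg.det(np.hstack([s, xyz]))
        return np.sqrt(Dx**2 + Dy**2 + Dz**2 - 4 * a * c) / (2 * abs(a))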
Examples of Diophantine equations with a large finite number of solutions I wonder if there are examples of Diophantine equations (or systems of such equations) with integer coefficients, fitting on a few lines, that have been proven to have a finite, but really huge, number of solutions. Are there ones with such a large number of solutions that we cannot write any explicit upper bound for this number using Conway chained arrow notation? Update: I am also interested in equations with few solutions but where a value in a solution is itself very large.
I think the rough answer to your question is that "there are no such natural equations." Let me try to justify this heuristic. Caporaso, Harris, Mazur; and Baker's method: Consider diophantine equations corresponding to curves, for example equations of the form: $$C: f(x,y) = 0$$ where $f(x,y)$ is a polynomial with integer coefficients. One knows that the number of solutions (with $x$ and $y$ in $\mathbf{Z}$ or $\mathbf{Q}$) is --- to some extent --- determined by the complex geometry of $C$. If the genus $g$ of $C$ is at least two, then Faltings proved that $C$ has only finitely many rational points. However, much more (may) be true. Assuming a conjecture of Lang, it was proved by Caporaso, Harris, and Mazur (J. Amer. Math. Soc. 10 (1997), 1-35) that the number of rational points $\#C(\mathbf{Q})$ is bounded by a function $A(g)$ which only depends on $g$, not on $C$. This doesn't prove that $A(2)$ [for example] is not enormous, but all known lower bounds in $g \ge 2$ on the number of rational points are at most polynomial. For example, I think the largest number of rational points ever found on a genus two curve is at most a few hundred. Since the genus is controlled by the degree of $f$, this suggests that it is highly unlikely to find an equation with small coefficients and degree which has absolutely massive solutions, even if one makes the coefficients big. Note that this is only a heuristic, but it strongly suggests that there are no such equations. It certainly says that no such equations are currently known. One is in slightly better shape (or worse, depending on what you're trying to do) if one restricts to integral points of $C$. For certain classes of equations (say $F(x,y) = m$ for homogeneous $F$) there are explicit bounds on the number of integral solutions in terms of the coefficients coming from Baker's theorem (on linear forms in logarithms). The bounds are not great (perhaps super-super exponential in the coefficients), but they certainly preclude numbers of the size you are interested arising from anything that one can sensibly write down. Moreover, one conjecturally expects that Baker's bounds are not optimal. For genus 1 and 0, the situation is similar. There's difference here again between whether one wants to consider only integral solutions or rational solutions, but the result (in either case) is that if one insists that there are only finitely many solutions (in either integers or rationals), then there is a bound for the largest such solution in terms of the coefficients (which, using Baker's method for integral points, might be quite large, but is still tiny compared to the numbers you are discussing). Moral: If one assumes Lang's conjecture (and we certainly don't have any evidence that it is false), there is still heuristic evidence to suggest that the types of equations you are looking for do not exist, either for curves or higher dimensional varieties: this is very speculative, but certainly means that writing down any such equations is either impossible or very hard. Exponential Diophantine Equations: There are other flavours of Diophantine equations besides algebraic varieties. For example, you could consider exponential Diophantine equations, e.g.: $2^n = x^2 + 23$. In this case, one can often convert solutions to these equations into subsets of polynomial equations of genus at least two.
For example, the previous equation gives solutions to one of the five genus three curves: $$Ay^5 = x^2 + 23, \qquad A = 1,2,4,8,16.$$ Thus one can often "reduce" to the case of curves. Alternatively, Baker's method can often be used directly on exponential equations to give upper bounds. The phenomenon of a big "smallest" solution: There are some equations which have infinitely many solutions of which the "smallest" is quite large, for example, the first non-trivial solution to Pell's equation $x^2 - 61 y^2 = 1$ has $$[x,y] = [1766319049, 226153980].$$ However, in these cases (and in similar cases coming from elliptic curves) one at least expects (and knows for Pell) that there exists a bound on the regulator which is polynomial in the coefficients, where the regulator involves the logarithm of the entries. So this gives upper bounds for primitive solutions which are exponential in a polynomial in the coefficients, still enough to prevent super-huge numbers arising as the smallest interesting solution. Summary: There are many other flavours of Diophantine equation one can write down, but I think the summary is that the conjectures of Lang suggest, on some heuristic level, that the type of phenomenon you are looking for simply does not occur naturally. There may be a way of embedding Ackermann's function into some system of equations (but not into polynomial equations), but I think you would consider that cheating.
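The fundamental Pell solution quoted above is small enough to recover by machine. As an illustration (not part of the answer itself), a sketch using the standard continued-fraction expansion of $\sqrt{d}$:

    from math import isqrt

    def pell_fundamental(d):
        """Smallest positive (x, y) with x**2 - d*y**2 == 1; d must not be a square."""
        a0 = isqrt(d)
        m, k, a = 0, 1, a0
        p_prev, p = 1, a0             # convergent numerators of sqrt(d)
        q_prev, q = 0, 1              # convergent denominators
        while p * p - d * q * q != 1:
            m = a * k - m
            k = (d - m * m) // k
            a = (a0 + m) // k
            p_prev, p = p, a * p + p_prev
            q_prev, q = q, a * q + q_prev
        return p, q

    print(pell_fundamental(61))       # (1766319049, 226153980)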
Let $b$ be a non-zero integer, and let $n$ be a positive integer. The equation $y(x-b)=x^n$ has only finitely many integer solutions. The first solution is: $x=b+b^n$ and $y=(1+b^{n-1})^n$. The second solution is: $x=b-b^n$ and $y=-(1-b^{n-1})^n$, cf. [1, page 7, Theorem 9] and [2, page 709, Theorem 2]. The number of integer solutions to $(y(x-2)-x^{\textstyle 2^n})^2+(x^{\textstyle 2^n}-s^2-t^2-u^2)^2=0$ grows quickly with $n$, see [3]. References [1] A. Tyszka, A conjecture on rational arithmetic which allows us to compute an upper bound for the heights of rational solutions of a Diophantine equation with a finite number of solutions, http://arxiv.org/abs/1511.06689 [2] A. Tyszka, A hypothetical way to compute an upper bound for the heights of solutions of a Diophantine equation with a finite number of solutions, Proceedings of the 2015 Federated Conference on Computer Science and Information Systems (eds. M. Ganzha, L. Maciaszek, M. Paprzycki), Annals of Computer Science and Information Systems, vol. 5, 709-716, IEEE Computer Society Press, 2015. [3] A. Tyszka, On systems of Diophantine equations with a large number of integer solutions, http://arxiv.org/abs/1511.04004
For Orthogonal Unit Vectors, Prove that Let $A$ be an $n \times n$ matrix. Prove that for $n\geq 2$ there exist $n$ orthogonal unit vectors $u_1,\ldots,u_n$ in $\mathbb{R^n}$ such that $Au_1,\ldots,Au_n$ are also orthogonal. As I recall this is a theorem, but I couldn't find its proof. Can someone help me, please?
This seems just a direct consequence of the Singular Value Decomposition for any matrix. In fact, $A=UDV$, where $D$ is diagonal and $U,V$ are orthogonal. As a consequence, $AV^T = UD$, but the columns of both $V^T$ and $UD$ are orthogonal.
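A concrete NumPy experiment makes this vivid (np.linalg.svd returns $U$, the singular values, and $V$, with $A = U\,\mathrm{diag}(\sigma)\,V$ in the notation of the answer):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 4))

    U, sigma, V = np.linalg.svd(A)    # A = U @ diag(sigma) @ V
    u = V.T                           # columns are the orthonormal u_1..u_n
    Au = A @ u                        # equals U @ diag(sigma)

    print(np.round(u.T @ u, 12))      # identity: the u_i are orthonormal
    print(np.round(Au.T @ Au, 12))    # diagonal: the A u_i are orthogonal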
Why isn't several complex variables as fundamental as multivariable calculus? One typically studies analysis in $\mathbb{R}^n$ after studying analysis in $\mathbb{R}$. Why can't the same be said of $\mathbb{C}$?
I would say that the main reason that several complex variables is rarely seen in the undergraduate curriculum (and even not that often in the graduate curriculum unless the department has some specialists in SCV) is that you can't get very far without lots of prerequisites. You can for example start by proving Cauchy's integral formula for a polydisc, and from that Liouville's theorem and a few other well known results from one complex variable quickly follow. From Cauchy's integral formula, it also follows that holomorphic functions of several variables admit power series expansions (but the domain of convergence is not usually a ball in $\mathbb{C}^n$: compare $\sum_{j,k} z^j w^k$, $\sum_{k} (z+w)^k$ and $\sum_{k} z^k w^k$ for a few examples of what might happen). From here you can go on and study logarithmically convex Reinhardt domains. Note, however, that the mere definition of a holomorphic function in several variables is a little problematic. You want to say that a function is holomorphic if it is holomorphic in each variable separately, but to show that this is equivalent to other plausible definitions (without assuming that the function is for example locally bounded or jointly continuous) is surprisingly difficult. You may even get as far as showing a version of Hartogs' extension theorem: If $\Omega$ is a domain in $\mathbb{C}^n$ and $K$ is a compact subset such that $\Omega \setminus K$ is connected, every holomorphic function on $\Omega\setminus K$ extends to $\Omega$. (Here $n > 1$, of course.) I think this is about how far you can get without bringing in tools from PDE, potential theory, algebra (sheaf theory), functional analysis, differential geometry, distribution theory and probably a few more fields. The big highlight in a first course in several complex variables is usually to solve the Levi problem, i.e. to characterize the domains of existence for holomorphic functions. (Hartogs' extension theorem shows that some domains are unnatural to study, since all holomorphic functions extend to a bigger domain.) This is usually done with Hörmander's $L^2$-solution of the $\bar\partial$-equation. (Or via sheaf theory à la Oka.) While it's not strictly necessary to have a modern PDE course as a prerequisite, it's certainly valuable. At the very least you need to know some functional analysis (and preferably some potential theory as well), including some exposure to unbounded linear operators on Hilbert spaces, to be able to understand the Hörmander solution. (For the sheaf theory solution, you need a healthy background in algebra instead.) Similarly, you need some differential geometry (at least familiarity with differential forms and tangent bundles) to understand the more complicated integral formulas such as the Bochner–Martinelli formula and the geometric aspects of pseudoconvexity, which is central for a deeper understanding of SCV. In fact, the interplay between the complex geometry of the domain and the corresponding function theory is a recurring theme in SCV. Function theory in strictly pseudoconvex domains, for example, looks rather different from function theory in weakly pseudoconvex domains. (Many finer points concerning weakly pseudoconvex domains are still open problems.) Summing up, to do a really meaningful course in SCV, you need more background than what is reasonable to expect from an undergraduate. After all, SCV is really a 20th century field of mathematics!
The Levi problem for example wasn't solved until the early 50's (Hörmander's solution is as late as 1965).
You need to know concepts from analysis on $\mathbb R$ and $\mathbb R^2$, including power series, derivatives, line integrals, functions, analytic geometry, and the basic topology of those real (metric) spaces, before you talk about analytic functions from $\mathbb C$ to $\mathbb C$. That is the main reason why you see analysis on real spaces before you get to analysis on $\mathbb C$. $\mathbb R^2$ with Cartesian and polar coordinates is $\mathbb C^1$. And of course you do $\mathbb C^1$ before analysis on higher dimensional $\mathbb C$-spaces.
Proper definition use in Stokes' theorem Let the curve $C$ be a piecewise smooth and simple closed curve enclosing a region $D$. Some sources assert Stokes' theorem to be: $$\oint_{C} F.dr = \iint_{R}\nabla \times FdS$$ whereas some claim it to be $$\oint_{C} F.dr = \iint_{R}\nabla \times F.n.dS$$ Could someone clear the air as to which of the above definitions is correct? Thanks in advance.
From the first of the statements, the surface integral is written as $$\iint_R \nabla\times \vec F \cdot d\vec S$$ where I have added overarrows to clarify vector quantities. In particular, $d\vec S$ means integrating over the surface in the direction of the unit normal. In the second statement the surface integral is written as $$\iint_R \nabla\times\vec F \cdot \hat n \ dS$$ where $\hat n$ is the unit normal and the integral now is over the surface treated as a 'scalar'. Loosely speaking, $d\vec S = \hat n \, dS$. In other words, the two statements properly understood are equivalent.
Give two integers $m$ and $n$ such that $n^2$ is a multiple of $m$, $n$ is not a multiple of $m$ and $n > m$ Give two integers $m$ and $n$ such that $n^2$ is a multiple of $m$, $n$ is not a multiple of $m$ and $n &gt; m.$ I think that such two integers don't exist, but even if that were true, I'm having trouble with proving that. I have found clear examples when I neglect the last condition ($n &gt; m$). Any help on this problem would be much appreciated. The observation that I did make is the fact that since $n^2$ is a multiple of $m$ we can write it as $k * m$, which means that $k$ has to be larger than $n$. I think that could be a starting point of some sort, but I wasn't capable of going any further.
In general pick $n=pq$, where $p$ is a prime, and let $m=p^2$, where $q$ is also a prime such that $p<q$. This satisfies all your requirements.
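The smallest instance of this construction, checked mechanically in Python with $p=2$, $q=3$:

    p, q = 2, 3
    m, n = p**2, p * q                        # m = 4, n = 6
    print(n > m, n**2 % m == 0, n % m != 0)   # True True True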
Given a squarefree $a$ and $c>b>1$ such that $b$ is not a factor of $c$, we can define $m=ab^2$ and $n=abc.$ Then $n=abc>ab^2=m,$ and $n^2=a^2b^2c^2$ is divisible by $m=ab^2$, and $\frac{n}{m}=\frac{abc}{ab^2}=\frac{c}{b}$ is not an integer since $c$ is not divisible by $b.$ On the other hand, given any two $m,n$ satisfying these conditions, we can find $a,b,c.$ If $m$ is square-free, then $m\mid n^2$ implies $m\mid n.$ (Why?) So $m$ is not square-free. This means that we can write (uniquely) $m=ab^2$ where $a$ is square-free and $b>1.$ Now, since $ab^2\mid n^2,$ with $a$ square-free, then $ab\mid n.$ (Why?) Writing $n=abc$ for some $c,$ we then see that $n>m$ means $c>b.$ We also see that $m=ab^2$ is a factor of $n=abc$ if and only if $b$ is a factor of $c.$ So this construction gives all $m,n$ uniquely: Start with $a$ square-free, and $c>b>1$ with $c$ not a multiple of $b.$ Then take $m=ab^2,n=abc.$ Simple cases: When $a=1$ and $b>1$ we can choose $c=b+1$; then we get $m=b^2$ and $n=b(b+1).$
Prove by induction that $(n+1)^2 + (n+2)^2 + ... + (2n)^2 = \frac{n(2n+1)(7n+1)}{6}$ Prove by induction that $$(n+1)^2 + (n+2)^2 + ... + (2n)^2 = \frac{n(2n+1)(7n+1)}{6}.$$ I got up to: $n=1$ is true, and assuming $n=k$ prove for $n=k+1$. Prove... $$\frac{k(2k+1)(7k+1)+6(2(k+1))^2}{6} = \frac{(k+1)(2k+3)(7k+8)}{6}$$ I keep trying to expand to $6(2k+2)^2$ and factorising but I end up being short on one factor, e.g., I end up with $\frac{(k+1)(2k+3)(7k+6)}{6}$.
Alternative route (also with induction) You can start by proving inductively that: $$\sum_{k=1}^{n}k^{2}=\frac{1}{6}n\left(n+1\right)\left(2n+1\right)$$ Then have a look at: $$\sum_{k=n+1}^{2n}k^{2}=\sum_{k=1}^{2n}k^{2}-\sum_{k=1}^{n}k^{2}$$
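Carrying that route to the end (the arithmetic the hint leaves to you): $$\sum_{k=n+1}^{2n}k^{2}=\frac{2n(2n+1)(4n+1)}{6}-\frac{n(n+1)(2n+1)}{6}=\frac{n(2n+1)\bigl(2(4n+1)-(n+1)\bigr)}{6}=\frac{n(2n+1)(7n+1)}{6}.$$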
Inductive step spelled out in detail: \begin{align*} &\hphantom{=}\frac{n(2n+1)(7n+1)}6+(2n+1)^2+(2n+2)^2-(n+1)^2\\ &=\frac{n(2n+1)(7n+1)}6+(2n+1)^2+3(n+1)^2\\ &=(2n+1)\left(\frac{n(7n+1)}6+2n+1\right)+3(n+1)^2\\ &=(2n+1)\frac{n(7n+1)+12n+6}6+3(n+1)^2\\ &=(2n+1)\frac{7n^2+13n+6 }6+3(n+1)^2\\ &=\frac{(2n+1)(n+1)(7n+6)}6+3(n+1)^2\\ &=(n+1)\left(\frac{(2n+1)(7n+6)}6+3(n+1)\right)\\ &=(n+1)\frac{(2n+1)(7n+6)+18(n+1)}6\\ &=(n+1)\frac{14n^2+19n+6+18n+18}6\\ &=(n+1)\frac{14n^2+37n+24}6\\ &=\frac{(n+1)(2n+3)(7n+8)}6 \end{align*}
Limit vs interior definition of continuity Suppose I have two topological spaces $X$ and $Y$ whose topologies are defined by interior operators $\text{int}_X$ and $\text{int}_Y$ respectively, as well as a function $f$ with domain $I$ (for input) and codomain $O$ (for output). Here's where I hit my quandary: I see two potential routes for defining continuity. I could say the function $f: I \rightarrow O$ is continuous at the point $\tilde{x}$ iff $\lim_{x \rightarrow \tilde{x}} f(x) = f(\tilde{x})$, then say it's continuous everywhere in $I$ if it's continuous at every point $\tilde{x}$ in $I$. Or, I could say the function $f: I \rightarrow O$ is continuous iff $f^{-1}\left[\text{int}_X(A)\right]$ is a subset of $\text{int}_Y \left( f^{-1}[A] \right)$ for any $A \subseteq O$. My question is: are these two definitions equivalent? Could someone give me some insight into their equivalence or lack thereof? (I'm assuming the limits are defined with respect to the topologies of $\text{int}_X$ and $\text{int}_Y$.)
The following article gives the proof you're looking for: Continuous functions and convergent sequences
Uniform convergence and sequence of functions I was asked to prove that $f_n(x) = (1-x)^{\alpha}x^n$ is uniformly convergent on $[0,1]$, where $\alpha>0$. I did this part; then I was asked whether the corresponding series of functions converges uniformly on the given interval, and if it does, for what $\alpha$. Hints are really appreciated. Thanks
Let $S_n(x)=\sum _{k=0}^n(1-x)^\alpha x^k=(1-x)^\alpha \sum_{k=0}^nx^k=(1-x)^\alpha\dfrac{1-x^{n+1}}{1-x}=(1-x)^{\alpha-1}(1-x^{n+1})$ for $x\in[0,1)$, while $S_n(1)=0$. Hence $S(x)=\lim_{n\to\infty}S_n(x)=(1-x)^{\alpha-1}$ for $x\in[0,1)$ and $S(1)=0$. For $\alpha\le 1$ this limit function is discontinuous at $x=1$ (and even unbounded when $\alpha<1$), so the convergence cannot be uniform. For $\alpha>1$, on the other hand, $\sup_{x\in[0,1)}|S_n(x)-S(x)|=\sup_{x\in[0,1)}(1-x)^{\alpha-1}x^{n+1}\to 0$ as $n\to\infty$, so the series converges uniformly precisely when $\alpha>1$.
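This dichotomy is easy to see numerically (a grid illustration in Python, not a proof; it samples $[0,1)$ and prints $\sup|S_n-S|$ for a few values of $\alpha$ and $n$):

    import numpy as np

    x = np.linspace(0.0, 1.0, 100001)[:-1]            # grid on [0, 1)
    for alpha in (0.5, 1.0, 2.0):
        S = (1 - x)**(alpha - 1)                      # pointwise limit on [0, 1)
        for n in (10, 100, 1000):
            Sn = (1 - x)**(alpha - 1) * (1 - x**(n + 1))
            print(alpha, n, np.max(np.abs(Sn - S)))   # shrinks only for alpha > 1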
Differentiability: Partially Defined Functions Reference These ideas came to my mind while reading Lee's Introduction to Smooth Manifolds (cf. p. 45). Definition Let $E$ and $F$ be two Banach spaces together with a plain subset $A\subseteq E$. Here, a partially defined function $f:A\to F$ is called differentiable at $a\in A$ if it admits an extension $\bar{f}_a:E\to F$ differentiable at $a$. Remarks Note the dependence of the extension on the point under consideration: $\bar{f}_a$. Also, a function $f:U\to F$ with open domain $U$ is differentiable at $u\in U$ in the definition given above iff it is differentiable there in the ordinary sense. The leading principle of this approach to differentiability is that a linear approximation rests on linear spaces. Plain subsets or opens in general aren't! Problems (Riesz-Dunford Functional Calculus) (resolved!) Let a function $f:A\to F$ be (continuously) differentiable in $A$ in the definition given above. Does it necessarily admit an extension $\bar{f}:E\to F$ that happens to be (continuously) differentiable on some whole neighborhood $U_A$ of $A$ rather than merely on $A$? (Manifolds with Boundary) Let a function $f:A\to F$ be (continuously) differentiable in $A$ in the definition given above. Does it necessarily admit an extension $\bar{f}:E\to F$ that happens to be (continuously) differentiable at every point $a\in A$ simultaneously, rather than a separate extension $\bar{f}_a:E\to F$ for every point? Explanation (Riesz-Dunford Functional Calculus) The Riesz-Dunford Calculus applies only to functions that happen to be holomorphic on some neighborhood of the spectrum of an operator. A positive result here would pin the problem to holomorphic functions on the spectrum precisely. (Manifolds with Boundary) On manifolds a map is differentiable on the boundary iff its coordinate expression has one-sided directional derivatives within half space. A negative result here would complicate the situation a lot. Moreover, the definition given in Lee's book for differentiability of partially defined functions slightly varies from the one given above to the extent that it requires the existence of a common extension. The drawback there, however, is that though differentiability is a local property it is defined pointwise. So from a structural point of view the definition given above shows consistency, while for practical purposes the definition given in Lee's book is favourable. A positive result here would unveil them as equivalent and therefore justify the approach. Attempts (Riesz-Dunford Functional Calculus) (Manifolds with Boundary) For some function on half space $f:\mathbb{H}^m\to\mathbb{R}^n$ to be differentiable in the sense given above it must hold that locally at specific points it extends infinitesimally as: $$F_E(a_0+v):=2F(a_0)-F(a_0-v),v\notin \mathbb{H}^m$$ while globally at all points it extends infinitesimally as: $$F_E(a-n):=2F(a)-F(a+n),n\bot\partial\mathbb{H}^m$$ These guiding constructions seem to clash. But this still requires a rigorous counterexample.
1.(Riesz-Dunford Functional Calculus) Consider the function $f(z):=|z|^2$ defined on the real and imaginary axes only. Then around every point it has an extension to a continuously differentiable function within some neighborhood. But that extension is confined to the Cauchy–Riemann equations, and therefore it must be $f(z)=+z^2$ and $f(z)=-z^2$ simultaneously in every neighborhood of zero, which is impossible. So the answer to the first problem is: No, in general there won't be an extension continuously differentiable in a whole neighborhood. 2.(Manifolds with Boundary) Besides, this example still doesn't resolve(!) the second problem, as one can choose the following extension:
Inverse Fourier Transform of Fourier Transform Given the Fourier Tranform defined as $$\color{red}{\mathcal{F}\{}\color{green}{f(}t\color{green}{)}\color{red}{\}}(\xi):=\int_{-\infty}^{+\infty}\color{green}{f(}t\color{green}{)}e^{-2\pi i \xi t}dt=\color{#F0F}{\hat{f}(}\xi\color{#F0F}{)}$$ They say inverse Fourier Transform is given by $$\color{blue}{\mathcal{F}^{-1}\{}\color{#F0F}{\hat{f}(}\xi\color{#F0F}{)}\color{blue}{\}}(t)=\int_{-\infty}^{+\infty}\color{#F0F}{\hat{f}(}\xi\color{#F0F}{)}e^{2\pi i \xi t}d\xi=\color{green}{f(}t\color{green}{)}$$ I'm expecting that $$\color{blue}{\mathcal{F}^{-1}\{}\color{red}{\mathcal{F}\{}\color{green}{f(}t\color{green}{)}\color{red}{\}}(\xi)\color{blue}{\}}(t)=\color{green}{f(}t\color{green}{)}$$ But actually $$\color{blue}{\mathcal{F}^{-1}\{}\color{red}{\mathcal{F}\{}\color{green}{f(}t\color{green}{)}\color{red}{\}}(\xi)\color{blue}{\}}(t)=\int_{-\infty}^{+\infty}(\int_{-\infty}^{+\infty}\color{green}{f(}t\color{green}{)}e^{-2\pi i \xi t}dt)e^{2\pi i \xi t}d\xi=$$$$=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\color{green}{f(}t\color{green}{)}dtd\xi=\int_{-\infty}^{+\infty}(\int_{-\infty}^{+\infty}d\xi)\color{green}{f(}t\color{green}{)}dt=\int_{-\infty}^{+\infty}0\cdot \color{green}{f(}t\color{green}{)}dt=0 \text{ ??}$$
Let $f \in L^1(\mathbb{R})$ so that $\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x) \, e^{-ix\xi} \, dx$ is defined. Then $$ |\hat{f}(\xi)| = \left|\int_{-\infty}^{\infty} f(x) \, e^{-ix\xi} dx\right| \leq \int_{-\infty}^{\infty} \left|f(x) \, e^{-ix\xi}\right| \, dx = \int_{-\infty}^{\infty} \left|f(x) \right| \, dx < \infty $$ so $\hat{f}\in L^\infty(\mathbb{R})$ but we cannot be sure that $\hat{f} \in L^1(\mathbb{R}).$ Therefore, let us define $\hat{f_\epsilon}(\xi) := e^{-\epsilon \xi^2/2}\hat{f}(\xi)$ so that $\hat{f_\epsilon} \in L^1(\mathbb{R}),$ and at the end take the limit $\epsilon\to 0.$ We then have \begin{align} \int_{-\infty}^{\infty} \hat{f_\epsilon}(\xi) \, e^{iy\xi} \, d\xi &= \int_{-\infty}^{\infty} e^{-\epsilon \xi^2/2}\hat{f}(\xi) \, e^{iy\xi} \, d\xi \\ &= \int_{-\infty}^{\infty} e^{-\epsilon \xi^2/2} \left( \int_{-\infty}^{\infty} f(x) \, e^{-ix\xi} \, dx \right) \, e^{iy\xi} \, d\xi \\ \\ &= \int_{-\infty}^{\infty} f(x) \left( \int_{-\infty}^{\infty} e^{-\epsilon \xi^2/2} \, e^{-ix\xi} \, e^{iy\xi} \, d\xi \right) \, dx \\ &= \int_{-\infty}^{\infty} f(x) \sqrt{\frac{2\pi}{\epsilon}} e^{-(x-y)^2/(2\epsilon)} \, dx \\ \\ &\overset{\{ x=y+z\sqrt{2\epsilon} \}}{=} \int_{-\infty}^{\infty} f(y+z\sqrt{2\epsilon}) \sqrt{\frac{2\pi}{\epsilon}} e^{-z^2} \, \sqrt{2\epsilon} \, dz \\ &= 2\sqrt{\pi} \int_{-\infty}^{\infty} f(y+z\sqrt{2\epsilon}) \, e^{-z^2} \, dz \\ &\to 2\sqrt{\pi} \int_{-\infty}^{\infty} f(y) \, e^{-z^2} \, dz = 2\sqrt{\pi} \int_{-\infty}^{\infty} \, e^{-z^2} \, dz \, f(y) = 2\pi \, f(y). \end{align} (This computation uses the convention $\hat{f}(\xi)=\int f(x)e^{-ix\xi}\,dx$, for which the inversion formula carries a factor $\frac{1}{2\pi}$; with the $e^{-2\pi i \xi t}$ convention of the question, the same Gaussian regularization argument returns $f(y)$ with no extra factor.)
The first integral should be \begin{align} I(u) &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(t)e^{-i2\pi\xi t}e^{i2\pi\xi u}dtd\xi\\ &=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(t)e^{-i2\pi\xi t+i2\pi\xi u}dtd\xi\\ &=\int_{-\infty}^{\infty}f(t)\left(\int_{-\infty}^{\infty}e^{i2\pi\xi (u-t)}d\xi\right)dt\\ \end{align} We have $$\delta(u-t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{i(u-t)x}dx = \int_{-\infty}^{\infty}e^{i2\pi(u-t)\xi}d\xi $$ Thereby it becomes \begin{align} I(u) &= \int_{-\infty}^{\infty}f(t)\delta(u-t)dt\\ &= f(u) \end{align} The function is returned as a mapping; it doesn't matter how its argument is denoted.
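The resolution of the apparent paradox can also be seen in a discrete analogue (a Python sketch; composing the DFT with its inverse recovers the samples rather than producing $0$):

    import numpy as np

    t = np.linspace(-10, 10, 1024, endpoint=False)
    f = np.exp(-t**2)                        # a sample signal
    recovered = np.fft.ifft(np.fft.fft(f))   # inverse transform of the transform
    print(np.max(np.abs(recovered - f)))     # ~1e-16: f comes back, not 0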
Product rule for partial derivatives I am going through the solution for a problem (1.7 from Goldstein's Classical Mechanics) where it says: I don't understand why the right-hand side of the second line only contains 4 terms when there should be 5. The very last term on line 1 has been expanded into 1 term on line 2 using the product rule, but according to the product rule there should be 2 terms.
$q$, $\dot{q}$ and $\ddot{q}$ are being treated here as separate variables, so $\dfrac{\partial \ddot{q}}{\partial \dot{q}} = 0$.
I think this results from: $$ \frac {\partial {\dot q}} {\partial {\dot q}} = 1 $$ Hence: $$ \frac {\partial {\ddot {q} }} {\partial \dot q} = \frac {\partial} {\partial \dot q} \frac {\partial {\dot q}} {\partial t} = \frac {\partial}{\partial t} \left( \frac {\partial {\dot q}} {\partial \dot q} \right) = \frac {\partial 1} {\partial t} = 0 $$